I have two tables, COURSE and OFFERING. Their columns are:
COURSE (
courseId,
title,
cost,
duration
)
and
OFFERING (
offeringID,
instructor,
startDate,
endDate,
courseId,
locationId
).
I want to configure a trigger that ensures that courses with a duration of 5 days (from the duration column of the COURSE table) cannot be offered in December (from the startDate column of the OFFERING table). I came up with the following trigger:
CREATE OR REPLACE TRIGGER checkDuration
BEFORE INSERT OR UPDATE ON
(course c JOIN offering o
ON
c.courseId = o.courseId)
FOR EACH ROW
BEGIN
IF ((to_char(:new.startDate, 'fmMONTH') = 'DECEMBER') AND duration = 5)
THEN
raise_application_error(-20001, 'Courses of five days duration cannot be run in December');
END IF;
END;
The trigger was created, but with errors.
This worked perfectly.
CREATE OR REPLACE TRIGGER checkDuration
BEFORE INSERT OR UPDATE on offering
FOR EACH ROW
DECLARE
isFound NUMBER;
BEGIN
SELECT 1 INTO isFound FROM DUAL WHERE EXISTS (
SELECT * FROM Course c
WHERE c.courseId = :new.courseId AND c.duration = 5);
IF EXTRACT(MONTH FROM :new.startDate) = 12
THEN RAISE_APPLICATION_ERROR(-20001, 'Courses of five days duration cannot be run in December');
END IF;
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
END;
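A quick sanity check (the ids and dates below are made up; course 101 is assumed to be a 5-day course):
INSERT INTO offering (offeringID, instructor, startDate, endDate, courseId, locationId)
VALUES (900, 'Smith', DATE '2023-12-04', DATE '2023-12-08', 101, 1);
-- expected: ORA-20001: Courses of five days duration cannot be run in December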
There is no way to link one trigger to two tables unless you create an updatable view that hides both tables and map all application code to work with this view. But this solution is only useful if you are starting a new application from scratch.
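For illustration only, that view approach might look roughly like this (the names are made up, and the INSTEAD OF trigger would also have to route the actual DML to the base tables, which is omitted here):
CREATE OR REPLACE VIEW course_offering_v AS
  SELECT o.offeringID, o.instructor, o.startDate, o.endDate,
         o.courseId, o.locationId, c.duration
  FROM offering o JOIN course c ON c.courseId = o.courseId;

CREATE OR REPLACE TRIGGER course_offering_v_ioiu
INSTEAD OF INSERT OR UPDATE ON course_offering_v
FOR EACH ROW
BEGIN
  IF EXTRACT(MONTH FROM :new.startDate) = 12 AND :new.duration = 5 THEN
    raise_application_error(-20001,
      'Courses of five days duration cannot be run in December');
  END IF;
  -- ... insert/update the underlying COURSE and OFFERING rows here ...
END;
/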
If your goal is to keep the code in one place, then use a stored procedure or package and call it from each trigger.
create or replace procedure CheckDuration(
pStartdate in date,
pDuration in number
)
is
begin
if( (extract(month from pStartDate) = 12) and (pDuration = 5) ) then
raise_application_error(-20001,
'Courses of five days duration cannot be run in December'
);
end if;
end;
/
CREATE OR REPLACE TRIGGER course_BIU
BEFORE INSERT OR UPDATE ON course for each row
begin
for cCheck in (
select o.StartDate from offering o where o.courseId = :new.courseId
) loop
CheckDuration(cCheck.StartDate, :new.Duration);
end loop;
end;
/
CREATE OR REPLACE TRIGGER offering_BIU
BEFORE INSERT OR UPDATE ON offering for each row
begin
for cCheck in (
select c.Duration from course c where c.courseId = :new.courseId
) loop
CheckDuration(:new.StartDate, cCheck.Duration);
end loop;
end;
For a more generic solution you can pass course%rowtype and offering%rowtype parameters to the stored procedure and perform various checks inside it, as sketched below.
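A rough sketch of that idea (only the December/duration rule is shown; each trigger would fill the records from :new plus a lookup on the other table before calling it):
create or replace procedure CheckCourseOffering(
  pCourse   in course%rowtype,
  pOffering in offering%rowtype
)
is
begin
  -- the same December rule; add any other cross-table checks here
  if (extract(month from pOffering.startDate) = 12) and (pCourse.duration = 5) then
    raise_application_error(-20001,
      'Courses of five days duration cannot be run in December');
  end if;
end;
/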
Related
I have two tables, COPY and BORROW. Their columns are:
COPY (
Copy_id,
Bk_id,
Loc_id,
Opinion
)
and
BORROW (
Cus_evo,
B_Date,
R_Date,
Fee,
Copy_id,
Cus_id
)
I want to configure a trigger that ensures copies stored in a specific location (London, from the Loc_id column of the COPY table) cannot be borrowed in December (from the B_Date column of the BORROW table).
I have created the following trigger:
CREATE OR REPLACE TRIGGER BORROW_TRIGGER
BEFORE INSERT ON BORROW FOR EACH ROW BEGIN
IF(TO_CHAR(TO_DATE(:NEW.B_Date, 'DD-MMM-YYYY'),'MMM'= 'DEC')
AND :NEW.Loc_id='LC0001')
THEN RAISE_APPLICATION_ERROR(-20669,'CANNOT BORROW BOOKS FROM LONDON STORE DURING MONTH DECEMBER');
END IF;
END;
/
The trigger is not created and has errors. Could you please give me the correct trigger for this?
The error I am experiencing:
Errors: TRIGGER BORROW_TRIGGER
Line/Col: 3/5 PLS-00049: bad bind variable 'NEW.LOC_ID'
Since you only need to perform the check when b_date is in December, it's more efficient to add this as a when condition at the top of the trigger. This also simplifies the trigger logic.
create or replace trigger borrow_check_trg
before insert on borrow
for each row
when (to_char(new.b_date,'MM') = '12')
declare
l_loc_id copy.loc_id%type;
begin
select c.loc_id into l_loc_id
from copy c
where c.copy_id = :new.copy_id;
if l_loc_id = 'LC0001' then
raise_application_error(-20669, 'Books cannot be borrowed from the London store during December');
end if;
end;
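A quick way to exercise it (the values below are made up; copy 'C001' is assumed to sit at loc_id 'LC0001'):
-- assumed sample data: copy 'C001' is stored at loc_id 'LC0001'
INSERT INTO borrow (cus_evo, b_date, r_date, fee, copy_id, cus_id)
VALUES (1, DATE '2023-12-10', NULL, 0, 'C001', 42);
-- expected: ORA-20669: Books cannot be borrowed from the London store during December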
You need to query the COPY table to get the field you need:
CREATE OR REPLACE TRIGGER BORROW_BI
BEFORE INSERT ON BORROW
FOR EACH ROW
DECLARE
strLoc_id COPY.LOC_ID%TYPE;
BEGIN
SELECT LOC_ID
INTO strLoc_id
FROM DUAL
LEFT OUTER JOIN COPY c
ON c.COPY_ID = :NEW.COPY_ID;
IF TO_CHAR(:NEW.B_Date, 'MM') = '12' AND
strLoc_id = 'LC0001'
THEN
RAISE_APPLICATION_ERROR(-20669,'CANNOT BORROW BOOKS FROM LONDON STORE DURING MONTH DECEMBER');
END IF;
END BORROW_BI;
You can check for existence by using a SELECT statement with a COUNT aggregate against the other table (copy), joined through the common column (copy_id), such as:
CREATE OR REPLACE TRIGGER Trg_Borrow_Trigger_BI
BEFORE INSERT ON borrow
FOR EACH ROW
DECLARE
v_exists INT;
BEGIN
SELECT COUNT(*)
INTO v_exists
FROM copy
WHERE copy_id = :NEW.copy_id
AND loc_id = 'LC0001'
AND TO_CHAR( :NEW.b_Date, 'MM' ) = '12';
IF v_exists > 0 THEN
RAISE_APPLICATION_ERROR(-20669,
'CANNOT BORROW BOOKS FROM LONDON STORE DURING MONTH DECEMBER');
END IF;
END;
/
where:
- the TO_DATE() conversion is superfluous (b_date is already a DATE), and
- prefixing loc_id with :NEW is not possible, since the trigger is created on the borrow table, which does not have that column; the copy table does.
I am trying to create a trigger that I am having problems with.
The trigger's job is to stop the row from being inserted.
There are 2 tables, student and subject.
I have to check and prevent the insert when the student has not enrolled in that subject, or when he has enrolled but only starts from the next semester, so he is not currently enrolled.
Please help me with this.
CREATE OR REPLACE TRIGGER check_student
BEFORE INSERT ON subject
FOR EACH ROW
BEGIN
IF :new.student_no
AND :new.student_name NOT IN (
SELECT DISTINCT student_no,student_name
FROM student
)
OR :new.student_enrollment_date < (
select min(enrolment_date)
from student
where
student.student_name = :new.student_name
and student.student_no = :new.guest_no
group by student.student_name , student.student_no
)
AND :new.student_name = (
select distinct student_name
from student
)
AND :new.student_no = (
select distinct student_no
from student
)
THEN
raise_application_error(-20000, 'Person must have lived there');
END IF;
END;
/
You probably have logical precedence issues in your conditions, since they mix ANDs and ORs without any parentheses. Also, you are checking scalar values against subqueries that return more than one row or more than one column, which will generate runtime errors.
But overall, I think your code can be simplified to a single existence check, with one query that verifies all the functional conditions at once (PL/SQL has no IF EXISTS, so the check counts the matching rows into a local variable). This should be pretty close to what you want:
create or replace trigger check_student
before insert on subject
for each row
declare
  l_matches number;
begin
  select count(*)
  into l_matches
  from student
  where student_name = :new.student_name
  and student_no = :new.student_no
  and enrolment_date <= :new.student_enrollment_date;

  if l_matches = 0 then
    raise_application_error(-20000, 'Person must have lived there');
  end if;
end;
/
Note: it is unclear what the purpose of the column :new.guest_no is, so I left it aside for the time being (I assumed that you meant :new.student_no instead).
Your code is quite unclear, so I am just using the same conditions you used in your question to show you how to proceed.
CREATE OR REPLACE TRIGGER CHECK_STUDENT BEFORE
INSERT ON SUBJECT
FOR EACH ROW
DECLARE
LV_STU_COUNT NUMBER := 0; --declared the variables
LV_STU_ENROLLED_FUTURE NUMBER := 0;
BEGIN
SELECT
COUNT(1)
INTO LV_STU_COUNT -- assigning value to the variable
FROM
STUDENT
WHERE
STUDENT_NO = :NEW.STUDENT_NO
AND STUDENT_NAME = :NEW.STUDENT_NAME;
-- ADDED BEGIN EXCEPTION BLOCK HERE
BEGIN
SELECT
COUNT(1)
INTO LV_STU_ENROLLED_FUTURE -- assigning value to the variable
FROM
STUDENT
WHERE
STUDENT.STUDENT_NAME = :NEW.STUDENT_NAME
AND STUDENT.STUDENT_NO = :NEW.GUEST_NO
GROUP BY
STUDENT.STUDENT_NAME,
STUDENT.STUDENT_NO
HAVING
MIN(ENROLMENT_DATE) > :NEW.STUDENT_ENROLLMENT_DATE;
EXCEPTION WHEN NO_DATA_FOUND THEN
LV_STU_ENROLLED_FUTURE := 0;
END;
-- OTHER TWO CONDITIONS SAME LIKE ABOVE, IF NEEDED
-- using variables to raise the error
IF LV_STU_COUNT = 0 OR ( LV_STU_ENROLLED_FUTURE >= 1 /* AND OTHER CONDITIONS*/ ) THEN
RAISE_APPLICATION_ERROR(-20000, 'Person must have lived there');
END IF;
END;
/
Cheers!!
I work for a company that has a DW - ETL setup. I need to write a query that checks over 2,500 values in a CASE WHEN ... IN clause and over 1,000 values in a WHERE ... IN clause. Basically it would look like the following:
SELECT
user_id
,CASE WHEN user_id IN ('user_n', +2500 user_[n+1] ) THEN 1
ELSE 0
END AS user_group
,item_id
FROM user_table
WHERE item_id IN ('item_n', +1000 item_[n+1] );
As you probably already know, Oracle allows a maximum of 1,000 literals in an IN list, so I tried OR-ing several IN clauses together (as suggested in other Stack Overflow threads):
SELECT
user_id
,CASE WHEN user_id IN ('user_n', +999 user_[n+1] )
OR user_id IN ('user_n', +999 user_[n+1] )
OR user_id IN ('user_n', +999 user_[n+1] ) THEN 1
ELSE 0 END AS user_group
,item_id
FROM user_table
WHERE item_id IN ('item_n', +999 item_[n+1] )
OR item_id IN ('item_n', +999 item_[n+1] );
NOTE: I know the math is off in the examples above, but you get the point.
The problem is that queries have a maximum execution time of 120 minutes and the job is being automatically killed. So I googled for solutions, and it seems temporary tables could be what I'm looking for, but in all honesty none of the examples I found is clear enough on how to get the values I want into the table and how to use that table in my original query. Not even the Oracle documentation was of much help.
Another potential problem is that I have limited rights and I've seen other people mention that in their companies they don't have the rights to create temporary tables.
Some of the info I found in my research:
ORACLE documentation
StackOverflow thread
StackOverflow thread 2
Another solution I found was using tuples instead, as mentioned in THIS thread (which I haven't tried), although another user mentions that performance seems to be greatly affected.
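From what I understood, the tuple workaround pairs each literal with a constant so the list is no longer a plain expression list; roughly something like this (untested on my side):
SELECT user_id, item_id
FROM user_table
WHERE (1, item_id) IN ( (1, 'item_1'), (1, 'item_2'), /* ... */ (1, 'item_1000') );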
Any guidance on how to use a Temporary Table or if anyone has another way of dealing with this limitation would be greatly appreciated.
Create a global temporary table, whose rows are private to your session and which keeps redo and undo overhead low:
CREATE GLOBAL TEMPORARY TABLE <table_name> (
<column_name> <column_data_type>,
<column_name> <column_data_type>,
<column_name> <column_data_type>)
ON COMMIT DELETE ROWS;
Then, depending on how the user list arrives, import the data into a holding table and run:
SELECT 'INSERT INTO global_temporary_table (<column>) VALUES ('''
       || h.<column>
       || ''');'
FROM holding_table h;
This gives you insert statements as output which you run to insert the data.
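Alternatively, if the holding table sits in the same schema, a direct INSERT ... SELECT avoids generating and re-running a script (table and column names here are placeholders):
INSERT INTO global_temporary_table (<column>)
SELECT <column> FROM holding_table;
-- with ON COMMIT DELETE ROWS, do not commit between this insert and the query,
-- or create the GTT with ON COMMIT PRESERVE ROWS instead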
then
SELECT <some_column>
FROM <some_table>
WHERE <some_value> IN
(SELECT <some_column> FROM <global_temporary_table>);
Use a collection:
CREATE TYPE Ints_Table AS TABLE OF INT;
CREATE TYPE IDs_Table AS TABLE OF CHAR(5);
Something like this:
SELECT user_id,
CASE WHEN user_id MEMBER OF Ints_Table( 1, 2, 3, /* ... */ 2500 )
THEN 1
ELSE 0
END
,item_id
FROM user_table
WHERE item_id MEMBER OF IDs_table( 'ABSC2', 'DITO9', 'KMKM9', /* ... */ 'QD3R5' );
Or you can use PL/SQL to populate a collection:
VARIABLE cur REFCURSOR;
DECLARE
  t_users Ints_Table := Ints_Table();   -- collections must be initialised before use
  t_items IDs_Table  := IDs_Table();
  f       UTL_FILE.FILE_TYPE;
  line    VARCHAR2(4000);
BEGIN
  -- populate the user ids 1 .. 2500
  t_users.EXTEND( 2500 );
  FOR i IN 1 .. 2500 LOOP
    t_users( i ) := i;
  END LOOP;
  -- load the item ids from a file; GET_LINE raises NO_DATA_FOUND at end-of-file
  f := UTL_FILE.FOPEN('DIRECTORY_HANDLE','datafile.txt','R');
  BEGIN
    LOOP
      UTL_FILE.GET_LINE(f, line);
      t_items.EXTEND;
      t_items( t_items.COUNT ) := line;
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      UTL_FILE.FCLOSE(f);
  END;
  OPEN :cur FOR
    SELECT user_id,
           CASE WHEN user_id MEMBER OF t_users
                THEN 1
                ELSE 0
           END
          ,item_id
    FROM user_table
    WHERE item_id MEMBER OF t_items;
END;
/
PRINT cur;
Or if you are using another language to call the query then you could pass the collections as a bind value (as shown here).
In PL/SQL you could use a collection type. You could create your own like this:
create type string_table is table of varchar2(100);
Or use an existing type such as SYS.DBMS_DEBUG_VC2COLL which is a table of VARCHAR2(1000).
Now you can declare a collection of this type for each of your lists, populate it, and use it in the query - something like this:
declare
strings1 SYS.DBMS_DEBUG_VC2COLL := SYS.DBMS_DEBUG_VC2COLL();
strings2 SYS.DBMS_DEBUG_VC2COLL := SYS.DBMS_DEBUG_VC2COLL();
procedure add_string1 (p_string varchar2) is
begin
strings1.extend();
strings1(strings1.count) := p_string;
end;
procedure add_string2 (p_string varchar2) is
begin
strings2.extend();
strings2(strings2.count) := p_string;
end;
begin
add_string1('1');
add_string1('2');
add_string1('3');
-- and so on...
add_string1('2500');
add_string2('1');
add_string2('2');
add_string2('3');
-- and so on...
add_string2('1400');
for r in (
select user_id
, case when user_id in (select column_value from table(strings2)) then 1 else 0 end as indicator
, item_id
from user_table
where item_id in (select column_value from table(strings1))
)
loop
dbms_output.put_Line(r.user_id||' '||r.indicator);
end loop;
end;
/
You can use the example below to understand global temporary tables and the two types of GTT.
CREATE GLOBAL TEMPORARY TABLE GTT_PRESERVE_ROWS (ID NUMBER) ON COMMIT PRESERVE ROWS;
INSERT INTO GTT_PRESERVE_ROWS VALUES (1);
COMMIT;
SELECT * FROM GTT_PRESERVE_ROWS;
DELETE FROM GTT_PRESERVE_ROWS;
COMMIT;
TRUNCATE TABLE GTT_PRESERVE_ROWS;
DROP TABLE GTT_PRESERVE_ROWS; -- WON'T WORK IF YOU DID NOT TRUNCATE THE TABLE FIRST OR IF THE TABLE IS BEING USED IN ANOTHER SESSION
CREATE GLOBAL TEMPORARY TABLE GTT_DELETE_ROWS (ID NUMBER) ON COMMIT DELETE ROWS;
INSERT INTO GTT_DELETE_ROWS VALUES (1);
SELECT * FROM GTT_DELETE_ROWS;
COMMIT;
SELECT * FROM GTT_DELETE_ROWS;
DROP TABLE GTT_DELETE_ROWS;
However, as you mentioned, you receive the input in an Excel file, so you can simply create a table and load the data into it. Once the data is loaded you can use it in the IN clause of your query.
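A minimal sketch of that staging table (the names match the example query below and are only illustrative; the Excel sheet would typically be exported to CSV and loaded with SQL*Loader, SQL Developer's import wizard, or generated INSERTs):
CREATE TABLE temptable (empid NUMBER);
INSERT INTO temptable (empid) VALUES (1001);
INSERT INTO temptable (empid) VALUES (1002);
-- ... one row per id ...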
select * from employee where empid in (select empid from temptable);
create global temporary table userids (userid number) on commit preserve rows;
insert into userids(...)
then a join or an IN subquery:
select ...
where user_id in (select userid from userids);
truncate table userids;
drop table userids;
I've got two tables, purchases and customers. I need to update visits_made (a number) in customers if the time of purchase, ptime (a date) in the purchases table, is different from the existing last_visit (a date) in the customers table.
Here is the trigger that I'm trying; I'm doing something terribly and shamefully wrong.
create or replace trigger update_visits_made
after insert on purchases
for each row
declare new_date purchases.ptime%type;
begin
select ptime into new_date
from purchases
where purchases.ptime = :new.ptime;
if new_date = customers.last_visit then
new.last_visit=old.last_visit;
else
update customers
set visits_made=visits_made+1
where purchases.ptime=:new.ptime;
end if;
end;
/
show errors
Can anybody please tell me where I'm going wrong?
I get the following errors:
LINE/COL ERROR
10/15 PLS-00103: Encountered the symbol "=" when expecting one of the
following:
:= . ( # % ;
11/1 PLS-00103: Encountered the symbol "ELSE"
16/1 PLS-00103: Encountered the symbol "END"
This is a scalar assignment in PL/SQL:
new.last_visit = old.last_visit;
Not only should this be done with := but new and old should have colons before their names:
:new.last_visit := :old.last_visit;
Once you have fixed that problem, then the update will pose an issue:
update customers
set visits_made=visits_made+1
where purchases.ptime = :new.ptime;
It is unclear to me what this is supposed to be doing, so I can't suggest anything, other than pointing out that purchases is not defined.
I think I more or less get your requirement. Basically it's a counter which counts the visits of a user based on login dates. I have written a snippet below which replicates the scenario mentioned above. Let me know if this helps.
-- Table creation and insertion script
CREATE TABLE PURCHASES
(
P_ID NUMBER,
P_TIME DATE
);
INSERT INTO PURCHASES
SELECT LEVEL,SYSDATE+LEVEL FROM DUAL
CONNECT BY LEVEL < 10;
CREATE TABLE CUSTOMERS
(
P_ID NUMBER,
P_VISITS NUMBER
);
INSERT INTO CUSTOMERS
SELECT LEVEL,NULL FROM DUAL
CONNECT BY LEVEL < 10;
-- Table creation and insertion script
--Trigger Script
CREATE OR REPLACE TRIGGER update_purchase BEFORE
INSERT ON purchases FOR EACH row
DECLARE
new_date purchases.p_time%type;
BEGIN
BEGIN
SELECT A.P_TIME
INTO new_date
FROM
(SELECT p_time,
ROW_NUMBER() OVER(PARTITION BY P_ID ORDER BY P_TIME DESC) RNK
-- INTO new_date
FROM purchases
WHERE purchases.p_id = :new.p_id
)a
WHERE A.RNK =1;
EXCEPTION WHEN OTHERS THEN
RETURN;
END;
IF :NEW.P_TIME <> new_date THEN
UPDATE customers
SET P_VISITS =NVL(P_VISITS,0)+1
WHERE p_id=:new.p_id;
END IF;
END;
--Trigger Script
--Testing Script
INSERT INTO PURCHASES VALUES
(9,TO_DATE('12/11/2015','MM/DD/YYYY'));
--Testing Script
In Oracle 11g, I am using the following in a procedure. Can someone please provide a better solution to achieve the same results?
FOR REC IN
(SELECT E.EMP FROM EMPLOYEE E
JOIN
COMPANY C ON E.EMP=C.EMP
WHERE C.FLAG='Y')
LOOP
UPDATE EMPLOYEE SET FLAG='Y' WHERE EMP=REC.EMP;
END LOOP;
Is there a more efficient/better way to do this? I feel as if this method will run one update statement for each record found (Please correct me if I am wrong).
Here's the actual code in full:
create or replace
PROCEDURE ACTION_MSC AS
BEGIN
-- ALL MIGRATED CONTACTS, CANDIDATES, COMPANIES, JOBS
-- ALL MIGRATED CANDIDATES, CONTACTS
FOR REC IN (SELECT DISTINCT AC.PEOPLE_HEX
FROM ACTION AC JOIN PEOPLE P ON AC.PEOPLE_HEX=P.PEOPLE_HEX
WHERE P.TO_MIGRATE='Y')
LOOP
UPDATE ACTION SET TO_MIGRATE='Y' WHERE PEOPLE_HEX=REC.PEOPLE_HEX;
END LOOP;
-- ALL MIGRATED COMPANIES
FOR REC IN (SELECT DISTINCT AC.COMPANY_HEX
FROM ACTION AC JOIN COMPANY CM ON AC.COMPANY_HEX=CM.COMPANY_HEX
WHERE CM.TO_MIGRATE='Y')
LOOP
UPDATE ACTION SET TO_MIGRATE='Y' WHERE COMPANY_HEX=REC.COMPANY_HEX;
END LOOP;
-- ALL MIGRATED JOBS
FOR REC IN (SELECT DISTINCT AC.JOB_HEX
FROM ACTION AC JOIN "JOB" J ON AC.JOB_HEX=J.JOB_HEX
WHERE J.TO_MIGRATE='Y')
LOOP
UPDATE ACTION SET TO_MIGRATE='Y' WHERE JOB_HEX=REC.JOB_HEX;
END LOOP;
COMMIT;
END ACTION_MSC;
You're right, it will do one update for each record found. Looks like you could just do:
UPDATE EMPLOYEE SET FLAG = 'Y'
WHERE EMP IN (SELECT EMP FROM COMPANY WHERE FLAG = 'Y')
AND FLAG != 'Y';
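Applied to your full ACTION_MSC procedure, the same pattern would look roughly like this (a sketch against the posted schema, not tested):
create or replace procedure ACTION_MSC as
begin
  update action set to_migrate = 'Y'
  where people_hex in (select people_hex from people where to_migrate = 'Y')
  and to_migrate != 'Y';

  update action set to_migrate = 'Y'
  where company_hex in (select company_hex from company where to_migrate = 'Y')
  and to_migrate != 'Y';

  update action set to_migrate = 'Y'
  where job_hex in (select job_hex from "JOB" where to_migrate = 'Y')
  and to_migrate != 'Y';

  commit;
end ACTION_MSC;
/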
A single update will generally be faster and more efficient than multiple individual row updates in a loop; see this answer for another example. Apart from anything else, you're reducing the number of context switches between PL/SQL and SQL, which add up if you have a lot of rows. You could always benchmark this with your own data, of course.
I've added a check of the current flag state so you don't do a pointless update with no changes.
It's fairly easy to compare the approaches to see that a single update is faster than one in a loop; with some contrived data:
create table people (id number, people_hex varchar2(16), to_migrate varchar2(1));
insert into people (id, people_hex, to_migrate)
select level, to_char(level - 1, 'xx'), 'Y'
from dual
connect by level <= 100;
create table action (id number, people_hex varchar2(16), to_migrate varchar2(1));
insert into action (id, people_hex, to_migrate)
select level, to_char(mod(level, 200), 'xx'), 'N'
from dual
connect by level <= 500000;
All of these will update half the rows in the action table. Updating in a loop:
begin
for rec in (select distinct ac.people_hex
from action ac join people p on ac.people_hex=p.people_hex
where p.to_migrate='Y')
loop
update action set to_migrate='Y' where people_hex=rec.people_hex;
end loop;
end;
/
Elapsed: 00:00:10.87
Single update (after rollback; I've left this in a block to mimic your procedure):
begin
update action set to_migrate = 'Y'
where people_hex in (select people_hex from people where to_migrate = 'Y');
end;
/
Elapsed: 00:00:07.14
Merge (after rollback):
begin
merge into action a
using (select people_hex, to_migrate from people where to_migrate = 'Y') p
on (a.people_hex = p.people_hex)
when matched then update set a.to_migrate = p.to_migrate;
end;
/
Elapsed: 00:00:07.00
There's some variation across repeated runs; in particular, update and merge are usually pretty close and sometimes swap places in my environment, but both are always significantly faster than updating in a loop. You can (and should, if performance is that critical) repeat this in your own environment with your own data spread and volumes, but a single update is going to be faster than the loop. Whether you use update or merge isn't likely to make much difference.