I have a small table that looks like this:
table People
( name VARCHAR(20) PRIMARY KEY
,group NUMBER(4)
);
And I need to create a trigger (or triggers) that will enforce the rules below:
- 1 If a group already has 10 names, I need to raise an error when someone tries to INSERT the next person into that group.
- 2 If an INSERT comes with a NULL value for the group field, I need to assign it to a group whose count is less than 10.
- 3 If all groups already have 10 names, I need to generate the next group number.
- 4 I need to avoid the mutating table error.
This is what I've done so far:
CREATE OR REPLACE TRIGGER people_bis
BEFORE INSERT ON people
FOR EACH ROW
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
g_count NUMBER(4);
g_num NUMBER(4);
g_l NUMBER(4);
g_r NUMBER(4);
BEGIN
Select count(*) into g_count from people where group = :new.group;
If g_count > 9 Then
raise_application_error (-20003,'Group reached its limit, please choose another');
End if;
If :NEW.group = '' or :NEW.group is null Then
select count (*) into g_l from (select count(name),group from people group by group having count(name) = 10);
select count (distinct group) into g_r from people;
if g_l = g_r then
select max(group)+1 into g_num from people;
else
select group into g_num from(select group, count(name) from people having count(name) < 10 group by group order by count(group) desc) where rownum < 2;
End if;
:New.group := g_num;
End if;
End people_bis;
The code above works, but when I INSERT as a select from a mirror table, e.g.
INSERT INTO people(name) select concat(name,'_next') from people_mirror;
the result is that it exceeds the given limit (10) for a group.
Also, I know that using PRAGMA AUTONOMOUS_TRANSACTION is not the best way to avoid the mutating table error, and I know I could achieve this functionality if I split the row trigger into statement triggers, but I'm just out of ideas on how to do it.
So anyone? ;)
Thanks in advance.
-------------------EDIT------------------------
These are the triggers that work, but I still have doubts about them, as both are BEFORE row-level triggers.
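(For reference, they assume a small helper table roughly like the sketch below; the column names are inferred from the trigger bodies, and since GROUP is a reserved word in Oracle both columns are quoted here for safety.)
-- assumed helper table, inferred from the triggers below
CREATE TABLE groups
( "GROUP" NUMBER(4) PRIMARY KEY  -- group number
, "COUNT" NUMBER(2)              -- current number of people in that group
);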
CREATE OR REPLACE TRIGGER people_bir1
BEFORE INSERT on people
FOR EACH ROW
DECLARE
V_count NUMBER(2);
BEGIN
If :NEW.group = '' or :NEW.group is null then
return;
end if;
insert into groups values(:New.group,1);
exception when dup_val_on_index then
Select count into v_count from groups where group = :New.group;
UPDATE groups set count = v_count+1 where group = :New.group;
END people_bir1;
CREATE OR REPLACE TRIGGER people_bir2
BEFORE INSERT on people
FOR EACH ROW
DECLARE
g_count NUMBER(2);
g_num NUMBER(2);
begin
if :NEW.group = '' or :NEW.group is null Then
select min(count) into g_count from groups;
if g_count = 10 Then
select max(group) into g_num from groups;
g_num := g_num+1;
Else
select min(group) into g_num from groups where count = g_count;
End if;
:New.group := g_num;
Else
select count into g_count from groups where group=:New.group;
if g_count > 9 then
raise_application_error (-20003,'More than 10 people in group, please select another');
end if;
end if;
end people_bir2;
As it is too long, I couldn't paste it as a comment on #TonyAndrews' answer.
You are right that adding PRAGMA AUTONOMOUS_TRANSACTION is no solution. One way to do this is to maintain a count of people per group in the GROUPS table (if you don't have a GROUPS table then you could add one) using triggers on PEOPLE:
After INSERT on PEOPLE: update GROUPS, add 1 to count of group they are in
After DELETE on PEOPLE: update GROUPS, subtract 1 from count of group they are in
After UPDATE on PEOPLE: update GROUPS, add 1 to new group, subtract 1 from old group
Then your BEFORE INSERT trigger doesn't need to look at the PEOPLE table, it can look at GROUPS:
Select people_count into g_count from groups where group = :new.group
for update;
Note the for update clause to lock the GROUPS row until your transaction completes.
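A minimal sketch of the insert-maintenance trigger described above, using the same column names as the snippet (group, people_count); the DELETE and UPDATE triggers would follow the same pattern:
-- sketch only: keeps GROUPS.people_count in step with PEOPLE
CREATE OR REPLACE TRIGGER people_ai
AFTER INSERT ON people
FOR EACH ROW
BEGIN
  UPDATE groups
     SET people_count = people_count + 1
   WHERE group = :NEW.group;
  -- create the group row on first use
  IF SQL%ROWCOUNT = 0 THEN
    INSERT INTO groups (group, people_count) VALUES (:NEW.group, 1);
  END IF;
END;
/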
You can use a compound trigger. It looks like this:
CREATE OR REPLACE TRIGGER people_bis
FOR INSERT ON people
COMPOUND TRIGGER
g_count NUMBER(4);
g_num NUMBER(4);
g_l NUMBER(4);
g_r NUMBER(4);
BEFORE STATEMENT IS
BEGIN
  -- work out which group number to hand out to rows that arrive with a NULL group
  -- (:NEW is not available in the statement-level sections)
  select count(*) into g_l from (select count(name), group from people group by group having count(name) = 10);
  select count(distinct group) into g_r from people;
  if g_l = g_r then
    select max(group) + 1 into g_num from people;
  else
    select group into g_num from (select group, count(name) from people group by group having count(name) < 10 order by count(name) desc) where rownum < 2;
  end if;
END BEFORE STATEMENT;
BEFORE EACH ROW IS
BEGIN
  if :NEW.group = '' or :NEW.group is null then
    :NEW.group := g_num;
  end if;
END BEFORE EACH ROW;
AFTER STATEMENT IS
BEGIN
  -- the table is no longer mutating here, so the limit can be checked safely
  select count(*) into g_count from (select group from people group by group having count(*) > 10);
  if g_count > 0 then
    raise_application_error(-20003, 'Group reached its limit, please choose another');
  end if;
END AFTER STATEMENT;
End people_bis;
Please note, most probably this code does not work exactly as you want, but it should give you a general impression of how to work with a compound trigger.
I'd like to get a count of every possible value from one table associated with each possible value from another table. So if my (combined) table basically looks like:
Order ID   Employee   Product Category
---------------------------------------
1          Alan       Automobile
2          Barry      Beauty
3          Charlie    Clothing
4          Alan       Beauty
I would like to be able to query and get a result of:
Employee   Count Auto   Count Beauty   Count Clothing
------------------------------------------------------
Alan       1            1              0
Barry      0            1              0
Charlie    0            0              1
I could manually query for each count, but then if I later add new product categories, it will no longer work. What I'm doing now is basically just:
SELECT employee, category, COUNT(*) FROM sales GROUP BY employee, category;
Which returns:
Employee   Category     Count
------------------------------
Alan       Automobile   1
Alan       Beauty       1
Alan       Clothing     0
etc. But with a large number of categories this can get a bit redundant. Is there any way to have it returned as a single row for each employee with a column for every category?
You can use a JSON approach:
SELECT employee,
json_object_agg(ProductCategory,total ORDER BY ProductCategory)
FROM (
SELECT employee, ProductCategory, count(*) AS total
FROM tbl
GROUP BY employee,ProductCategory
) s
GROUP BY employee
ORDER BY employee;
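With the sample rows from the question, each employee then comes back as a single row with one JSON object (categories with no sales are simply absent rather than 0); roughly:
 employee |          json_object_agg
----------+------------------------------------
 Alan     | { "Automobile" : 1, "Beauty" : 1 }
 Barry    | { "Beauty" : 1 }
 Charlie  | { "Clothing" : 1 }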
or with a two-step approach:
CREATE FUNCTION dynamic_pivot(central_query text, headers_query text)
RETURNS refcursor AS
$$
DECLARE
left_column text;
header_column text;
value_column text;
h_value text;
headers_clause text;
query text;
j json;
r record;
curs refcursor;
i int:=1;
BEGIN
-- find the column names of the source query
EXECUTE 'select row_to_json(_r.*) from (' || central_query || ') AS _r' into j;
FOR r in SELECT * FROM json_each_text(j)
LOOP
IF (i=1) THEN left_column := r.key;
ELSEIF (i=2) THEN header_column := r.key;
ELSEIF (i=3) THEN value_column := r.key;
END IF;
i := i+1;
END LOOP;
-- build the dynamic transposition query (based on the canonical model)
FOR h_value in EXECUTE headers_query
LOOP
headers_clause := concat(headers_clause,
format(chr(10)||',min(case when %I=%L then %I::text end) as %I',
header_column,
h_value,
value_column,
h_value ));
END LOOP;
query := format('SELECT %I %s FROM (select *,row_number() over() as rn from (%s) AS _c) as _d GROUP BY %I order by min(rn)',
left_column,
headers_clause,
central_query,
left_column);
-- open the cursor so the caller can FETCH right away
OPEN curs FOR execute query;
RETURN curs;
END
$$ LANGUAGE plpgsql;
then
=> BEGIN;
-- step 1: get the cursor (we let Postgres generate the cursor's name)
=> SELECT dynamic_pivot(
'SELECT employee,ProductCategory,count(*)
FROM tbl GROUP BY employee,ProductCategory
ORDER BY 1',
'SELECT DISTINCT productCategory FROM tbl ORDER BY 1'
) AS cur
\gset
-- step 2: read the results through the cursor
=> FETCH ALL FROM :"cur";
Reference
I have to fetch bulk records and insert them into a table using a loop.
I am a little confused about how to fetch and insert records using a loop. Below I have shared what I have done so far.
declare
stud_Id varchar;
begin
stud_Id := select student_id from student_backup where is_active_flg ='Y';
for i in 1 ..stud_Id.count
loop
insert into users(student_id,password,status) values(stud_Id(i),'password','status')
where not exists (select student_id from users where student_id=stud_Id(i))
end loop;
commit;
end;
/
You can use the following:
declare
stud_Id student_backup.student_id%type;
begin
select nvl(max(student_id),0) into stud_Id
from student_backup
where is_active_flg ='Y';
if stud_Id >0 then
for i in 1 ..stud_Id
loop
insert into users(student_id,password,status)
select b.student_id,'password','status'
from student_backup b
left join users u on b.student_id = u.student_id
where is_active_flg ='Y'
and u.student_id is null
and b.student_id = i;
end loop;
end if;
commit;
end;
/
Demo
P.S. If I understood what you want to perform, you don't need the for loop (including the if statement) or the select statement at the beginning; you can apply the insert statement directly by removing the part and b.student_id = i.
So, convert your block to the one below:
declare
stud_Id student_backup.student_id%type;
begin
insert into users(student_id,password,status)
select b.student_id,'password','status'
from student_backup b
left join users u on b.student_id = u.student_id
where is_active_flg ='Y'
and u.student_id is null;
commit;
end;
/
Abdul,
I think you are searching for the following:
BEGIN
INSERT INTO USERS
SELECT STUDENT_ID, PASSWORD , STATUS
FROM student_backup
WHERE STUDENT_ID NOT IN (SELECT STUDENT_ID FROM USERS)
AND is_active_flg = 'Y';
END;
/
Hope this will be useful.
Demo
Can I update the result of a query easily?
Assume I have a big query which returns a salary column, and I need to update salaries based on this query's results.
ID is the primary key for my table.
Now I'm doing it like this:
STEP 1
select id from mytable ...... where something
STEP 2
update mytable set salary=1000 where id in (select id from mytable ...... where something)
Is there an alternative way to do that more easily?
Try for update and current of. You said that you are looking for something like "updating data on grid".
create table my_table( id number, a varchar2(10), b varchar2(10));
insert into my_table select level, 'a', 'b' from dual connect by level <=10;
select * from my_table;
declare
rec my_table%rowtype;
cursor c_cursor is select * from my_table for update;
begin
open c_cursor;
loop
fetch c_cursor into rec;
exit when c_cursor%notfound;
if rec.id in (1,3,5) then
rec.a := rec.a||'x';
rec.b := rec.b||'+';
update my_table set row = rec where current of c_cursor;
else
delete from my_table where current of c_cursor;
end if;
end loop;
commit;
end;
select * from my_table;
Yes, you can directly update the result easily.
Here is an example:
update
(
select salary from mytable ...... where something
) set salary=1000
I am trying to write a trigger which will check whether a record is already in the table or not. If the record is already in the table (compared, for example, by name), the current record should be set to valid='False' and the new one inserted. Is there any way?
This is my idea, but it doesn't work.
create or replace TRIGGER
Check_r
before insert on t$customer
FOR each ROW
declare
v_dup number;
v_com number;
v_id number;
v_id_new number;
begin
v_date:=SYSDATE;
select count(id) INTO v_dup from t$customer where surname=:NEW.surname ;
select count(id) INTO v_com from t$customer where firstname =:NEW.firstname and
address=:NEW.address;
select id into v_id from t$customer where surname=:NEW.surname;
if v_dup > 0 and v_com=0 then
v_id_new:= m$_GET_ID; -- get id
update t$customer set valid = 'False' where id = v_id;
insert into t$customer ( id, surname ,firstname, valid, address ) values (v_id_new,:NEW.surname ,:NEW.firstname, :NEW.valid, :NEW.address);
end if;
if v_dup = 0 then
v_id_new:= m$_GET_ID; -- get id
insert into t$customer ( id, surname ,firstname, valid , address) values (v_id_new,:NEW.surname ,:NEW.firstname, :NEW.valid, :NEW.address);
end if;
end;
You can use a compound trigger, for example:
CREATE OR REPLACE TRIGGER Check_r
FOR INSERT ON t$customer
COMPOUND TRIGGER
TYPE customerRecordType IS RECORD(
surname t$customer.surname%TYPE,
firstname t$customer.firstname%TYPE,
address t$customer.address%TYPE,
ID NUMBER);
TYPE customerTableType IS TABLE OF customerRecordType;
customerTable customerTableType := customerTableType();
n NUMBER;
BEFORE STATEMENT IS
BEGIN
customerTable.DELETE; -- not required, just for better understanding
END BEFORE STATEMENT;
BEFORE EACH ROW IS
BEGIN
customerTable.EXTEND;
customerTable(customerTable.LAST).surname := :NEW.surname;
customerTable(customerTable.LAST).firstname := :NEW.firstname;
customerTable(customerTable.LAST).address := :NEW.address;
customerTable(customerTable.LAST).ID := m$_GET_ID;
:NEW.ID := customerTable(customerTable.LAST).ID;
END BEFORE EACH ROW;
AFTER STATEMENT IS
BEGIN
FOR i IN customerTable.FIRST..customerTable.LAST LOOP
SELECT COUNT(*) INTO n
FROM t$customer
WHERE surname = customerTable(i).surname;
IF n > 1 THEN
UPDATE t$customer
SET valid = 'False'
WHERE surname = customerTable(i).surname;
END IF;
SELECT COUNT(*) INTO n
FROM t$customer
WHERE firstname = customerTable(i).firstname
AND address = customerTable(i).address;
IF n > 1 THEN
UPDATE t$customer
SET valid = 'False'
WHERE firstname = customerTable(i).firstname
AND address = customerTable(i).address;
END IF;
END LOOP;
END AFTER STATEMENT;
END;
/
Please note, this solution is ugly and poor in terms of performance!
But it should give you an impression of how it works.
In general you should put all this into a PL/SQL Procedure instead of a trigger.
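For instance, a rough sketch of such a procedure, reusing t$customer and m$_GET_ID from the question (the procedure name, the column list, and the 'True' value are assumptions):
CREATE OR REPLACE PROCEDURE add_customer (
  p_surname   t$customer.surname%TYPE,
  p_firstname t$customer.firstname%TYPE,
  p_address   t$customer.address%TYPE
) AS
BEGIN
  -- mark any existing record for the same person as no longer valid
  UPDATE t$customer
     SET valid = 'False'
   WHERE surname = p_surname
     AND firstname = p_firstname;
  -- insert the new, valid record
  INSERT INTO t$customer (id, surname, firstname, address, valid)
  VALUES (m$_GET_ID, p_surname, p_firstname, p_address, 'True');
END add_customer;
/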
First, this is a trigger for insert; you don't need to write an insert statement.
Second, you need to update the old record. Just update it with your where clause.
CREATE OR REPLACE TRIGGER Check_r
before insert on t$customer
FOR each ROW
BEGIN
UPDATE t$customer set valid = 'False'
WHERE surname = :NEW.surname
AND firstname =:NEW.firstname;
:NEW.id := m$_GET_ID;
END;
I want to fill in the value of product_id. If article_code is not in the table, the insert should execute, but if the record exists I don't know how to select the id of that record and assign it to product_id.
The table "core_product" looks like this:
id
article_code
Here is the code (inside a function):
DECLARE
product_id int;
BEGIN
INSERT INTO core_product(article_code)
SELECT NEW.article_code
WHERE NOT EXISTS (
SELECT id INTO product_id
FROM core_product
WHERE article_code = NEW.article_code
)
RETURNING id INTO product_id;
END
Use the special variable FOUND:
DECLARE
product_id int;
BEGIN
SELECT id INTO product_id
FROM core_product
WHERE article_code = NEW.article_code;
IF NOT FOUND THEN
INSERT INTO core_product(article_code)
SELECT NEW.article_code
RETURNING id INTO product_id;
END IF;
END
If there is a unique constraint on article_code, you can harden the function against a race condition using a retry loop (as Craig suggested in a comment):
BEGIN
LOOP
SELECT id INTO product_id
FROM core_product
WHERE article_code = NEW.article_code;
IF FOUND THEN
EXIT; -- exit loop
END IF;
BEGIN
INSERT INTO core_product(article_code)
SELECT NEW.article_code
RETURNING id INTO product_id;
EXIT; -- exit loop
EXCEPTION WHEN unique_violation THEN
-- do nothing, go to the beginning of the loop
-- and check once more if article_code exists
END;
END LOOP;
-- do something with product_id
END;
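For completeness, a minimal sketch of the enclosing trigger function and trigger these snippets would live in; the function name, the orders table, and its product_id column are assumptions, not from the original post:
-- sketch only: fills NEW.product_id on a table that has article_code and product_id
CREATE OR REPLACE FUNCTION set_product_id() RETURNS trigger AS
$$
DECLARE
  product_id int;
BEGIN
  SELECT id INTO product_id
  FROM core_product
  WHERE article_code = NEW.article_code;
  IF NOT FOUND THEN
    -- simple variant; use the retry loop above if concurrent inserts are possible
    INSERT INTO core_product(article_code)
    SELECT NEW.article_code
    RETURNING id INTO product_id;
  END IF;
  NEW.product_id := product_id;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_set_product_id
BEFORE INSERT ON orders
FOR EACH ROW EXECUTE PROCEDURE set_product_id();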