Oracle PL/SQL Trigger to update another column

I am trying to create a trigger that updates another table with PL/SQL and I am having some problems. (I have read this, but it doesn't help a lot.)
Here is my situation. I have, let's say, 2 tables:
Customers Table
CustomerID number Primary Key, ItemsDelivered number
Items Table
CustomerID number, ItemID number, ItemDelivered Varchar(15)
Let's say that when someone places an order we get a new record in the Items table that looks like this:
| CustomerID | ItemID | ItemDelivered |
| 1 | 1 | False |
I want a trigger that will increment the ItemsDelivered counter whenever someone updates the ItemDelivered column to "True".
create or replace Trigger UpdateDelivered
After Update On Items
For Each Row
Declare
    Counter Customers.ItemsDelivered%Type;
Begin
    If (:Old.ItemDelivered = 'False' And :New.ItemDelivered = 'True') Then
        Select ItemsDelivered Into Counter From Customers Where CustomerID = :New.CustomerID;
        Update....
    End If;
END;
Here is my problem: if only the ItemDelivered column is updated, there is no New.CustomerID!
Is there any way to get the CustomerID of the row that has just been updated?
(I have tried to join with the inserted virtual table, but I get an error that the table doesn't exist.)

In a row-level trigger on an UPDATE, both :new.customerID and :old.customerID should be defined. And unless you're updating the CustomerID, the two will have the same value. Given that, it sounds like you want:
create or replace Trigger UpdateDelivered
After Update On Items
For Each Row
Begin
    If (:Old.ItemDelivered = 'False' And :New.ItemDelivered = 'True') Then
        Update Customers
           Set ItemsDelivered = ItemsDelivered + 1
         Where CustomerID = :New.CustomerID;
    End If;
END;
That being said, however, storing this sort of counter and maintaining it with a trigger is generally a problematic way to design a data model. It violates basic normalization and it potentially leads to all sorts of race conditions. For example, if you code the trigger the way you showed initially, doing a SELECT to get the original count and then an UPDATE, you'll introduce bugs in a multi-user environment: someone else could be in the process of marking an item delivered at the same time, neither transaction would see the other session's changes, and your counter would be set to the wrong value. And even if you implement bug-free code, you have to introduce a serialization mechanism (in this case, the row-level lock on the CUSTOMERS table taken out by the UPDATE) that forces different sessions to wait on each other, which limits the scalability and performance of your application.
To demonstrate that :old.customerID and :new.customerID will both be defined and will both be equal:
SQL> desc items
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CUSTOMERID                                         NUMBER
 ITEMID                                             NUMBER
 ITEMDELIVERED                                      VARCHAR2(10)
SQL> ed
Wrote file afiedt.buf
  1  create or replace
  2  trigger updateDelivered
  3  after update on items
  4  for each row
  5  begin
  6    if( :old.itemDelivered = 'False' and :new.itemDelivered = 'True' )
  7    then
  8      dbms_output.put_line( 'New CustomerID = ' || :new.customerID );
  9      dbms_output.put_line( 'Old CustomerID = ' || :old.customerID );
 10    end if;
 11* end;
SQL> /
Trigger created.
SQL> select * from items;
CUSTOMERID     ITEMID ITEMDELIVE
---------- ---------- ----------
         1          1 False
SQL> update items
2 set itemDelivered = 'True'
3 where customerID = 1;
New CustomerID = 1
Old CustomerID = 1
1 row updated.

If you would like to store the item count in the database, I would recommend a pair of triggers. You would use an after-row trigger to record the affected customer ID (perhaps in a PL/SQL collection in your package) and an after-statement trigger that actually updates the counter, calculating the items delivered directly from the base data. That is, with something like
select count(*) from Items where CustomerID = :customerID and ItemDelivered = 'True';
This way, you avoid the danger of corrupting the counter, because you are always setting it to what it should be. It's probably a Good Idea to keep the derived data in a separate table.
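Here is a minimal sketch of that pattern against the tables from the question (the package and trigger names are illustrative, not from the original answer):
create or replace package delivered_pkg as
    type t_ids is table of Customers.CustomerID%type index by pls_integer;
    g_ids t_ids;
end delivered_pkg;
/
-- after-row trigger: only record which customer was touched;
-- it never queries Items, so there is no mutating-table error
create or replace trigger items_delivered_row
after update of ItemDelivered on Items
for each row
begin
    delivered_pkg.g_ids(delivered_pkg.g_ids.count + 1) := :new.CustomerID;
end;
/
-- after-statement trigger: recompute the counter from the base data
create or replace trigger items_delivered_stmt
after update of ItemDelivered on Items
begin
    for i in 1 .. delivered_pkg.g_ids.count loop
        update Customers c
           set c.ItemsDelivered = (select count(*)
                                     from Items i
                                    where i.CustomerID = c.CustomerID
                                      and i.ItemDelivered = 'True')
         where c.CustomerID = delivered_pkg.g_ids(i);
    end loop;
    delivered_pkg.g_ids.delete;  -- reset for the next statement
end;
/
On 11g and later, a compound trigger can package both pieces into a single object.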
We built our old system entirely on database triggers which updated data in separate "derived" tables, and it worked very well. It had the advantage that all of our data manipulation could be performed by inserting, updating, and deleting rows in the base tables, with no need to know the business rules. For instance, to put a student into a class, you just had to insert a row into the registration table; when you then selected the data back, tuition, fees, financial aid, and everything else had already been calculated.

Related

How to upsert values in Postgres catching exceptions while identifying modifications to a table

I have two tables in Postgres, t_product and t_product_modifications having the following respective structures:
t_product
product_code product_category owner
============ ================ =====
A            home             Jack
B            office           Daniel
C            outdoor          Jack
D            home             Susan
(product_code and product_category are unique together and form a composite primary key.
There is also a NOT NULL constraint on the owner column.)
t_product_modifications
product_code last_modified_time
============ ==================
A            2020-04-07 16:10:30
B            2020-04-07 16:10:30
C            2020-04-07 16:10:30
D            2020-04-07 16:10:30
I basically need to do a bulk insert/update into the t_product table, and only if a record has actually been modified should I update the last_modified_time column in the t_product_modifications table. In addition, the bulk upsert as a whole should not fail when some constraint fails for certain records; rather, it should return a list of product_codes or an error log for the records whose upserts were not possible. (Also, for certain reasons I can't have both tables as one.)
For example, let us say I am trying to do a bulk upsert of the following values into the t_product table:
1. ('A','home', 'Susan')
2. ('B','office', 'Daniel')
3. ('E','office', NULL)
4. ('F','home', NULL)
When trying to insert the above four values, this is what needs to happen:
The first record should be updated successfully for the ('A','home') primary key, and the value Susan should be written to the owner column. Since this record is an update to the t_product table, the last_modified_time for the respective product should be updated in the t_product_modifications table.
The second record should be ignored. It should not cause any changes to the t_product_modifications table, since no modifications are being made to the t_product table.
The third record should be part of the output error log or exception, since the owner field cannot be NULL.
The fourth record should be part of the output error log or exception, since the owner field cannot be NULL.
I will be executing this Postgres query from a Python script and wish to save all errors that happen during the upsert without the entire query failing. I was unable to find a solution on StackOverflow that was efficient enough.
Trigger procedures are perfect for solving your problem. You need to create a procedure that is executed when a record in the t_product table is updated and that checks whether the column values have changed; if they have, it updates the last_modified_time column in the t_product_modifications table.
Your Trigger:
CREATE FUNCTION update_product() RETURNS TRIGGER AS $$
BEGIN
    IF (TG_OP = 'UPDATE') AND OLD != NEW THEN
        UPDATE t_product_modifications
        SET last_modified_time = NOW()
        WHERE product_code = NEW.product_code;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_product_modified_time
BEFORE UPDATE ON t_product
FOR EACH ROW EXECUTE PROCEDURE update_product();
Demo in DBfiddle
-- This will update the last_modified_time in the t_product_modifications table
UPDATE t_product SET product_category = 'home', owner = 'Susan' WHERE product_code = 'A';
-- Nothing will happen
UPDATE t_product SET product_category = 'office', owner = 'Daniel' WHERE product_code = 'B';
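For the bulk upsert itself (which the trigger above does not cover), a sketch using INSERT ... ON CONFLICT against the composite primary key could look like this; the WHERE clause keeps the DO UPDATE from firing when nothing actually changed, so the trigger stays quiet for untouched rows:
INSERT INTO t_product (product_code, product_category, owner)
VALUES ('A', 'home', 'Susan'),
       ('B', 'office', 'Daniel')
ON CONFLICT (product_code, product_category)
DO UPDATE SET owner = EXCLUDED.owner
WHERE t_product.owner IS DISTINCT FROM EXCLUDED.owner;
Note that a NOT NULL violation (records 3 and 4 in the question) still aborts the whole statement; to collect per-record errors, the Python script would need to send each row inside its own savepoint, or filter out NULL owners before building the batch.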

table mutating in oracle

I have a table, participated, which has a trigger that returns the total damage amount for a driver id when a new record is inserted into the table.
create or replace trigger display
after insert on participated
for each row
declare
    t_id  integer;
    total integer;
begin
    select driver_id, sum(damage_amount)
    into t_id, total
    from participated
    where :new.driver_id = driver_id
    group by driver_id;
    dbms_output.put_line(' total amount ' || total || ' driver id ' || t_id);
end;
/
The trigger is created, but it returns this error:
ORA-04091: table SQL_HSATRWHKNJHKDFMGWCUISUEEE.PARTICIPATED is mutating,
trigger/function may not see it ORA-06512: at
"SQL_HSATRWHKNJHKDFMGWCUISUEEE.DISPLAY", line 5
ORA-06512: at "SYS.DBMS_SQL", line 1721
Please help with this trigger.
As commented above, this feels like a code smell. A row-level trigger cannot query or change the table being changed: among other things, a trigger that changed its own table would fire itself again, ending in an endless loop of trigger calls.
Changing this to a statement-level trigger would not do the same thing.
Preferred solutions:
1) Put this into the application logic and calculate the total after the row has been inserted; this is trivial, as #kfinity mentioned.
2) Earmark newly inserted rows and use a statement-level trigger. For example, have an extra column, say is_new default 1, so that all newly inserted rows carry this flag. Then use a statement-level trigger, as suggested by #hbourchi, to calculate and update all drivers where is_new is 1, and set the flag back to zero.
3) The logic in 2) can be implemented using PL/SQL and in-memory PL/SQL tables. A row-level trigger collects the affected driver ids in a PL/SQL table, and a statement-level trigger then updates the totals of the collected drivers; see the compound-trigger sketch after this list. Tom Kyte has lots of examples of this; it is not rocket science, but if you lack PL/SQL knowledge it is probably not your way. (Note: PL/SQL is super important when using Oracle; without it, Oracle is just an expensive Excel sheet like any other db. It is worth using.)
4) Probably you should revise your data model, and the problem solves itself. The participated table has multiple rows per driver id. You want one total row per driver id, so why would you put that summary in the same table? Simply add a new table, participated_total, with driver_id and damage_amount fields, and feel free to insert or update it from your trigger as you originally planned!
5) In fact, you can calculate these totals on the fly (depending on the number of rows and your performance expectations) simply by crafting the right SQL when querying; this way there is no need to store pre-calculated totals at all.
6) But if you wish Oracle to store these totals for you, you can do 5) and use materialized views. These are in fact tables that Oracle updates and maintains automatically, so your query from 5) does not need to calculate anything on the fly; it can read the automatically pre-calculated data from the materialized view.
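For illustration, here is a sketch of option 3) as a compound trigger (11g and later), assuming the separate participated_total table from option 4); the trigger and type names are made up for the example:
create or replace trigger participated_totals_trg
for insert on participated
compound trigger
    -- this state lives for the duration of one triggering statement
    type t_ids is table of participated.driver_id%type index by pls_integer;
    g_ids t_ids;

    after each row is
    begin
        -- only collect the affected driver ids; no query on participated here
        g_ids(g_ids.count + 1) := :new.driver_id;
    end after each row;

    after statement is
    begin
        -- the statement has finished, so querying participated is safe now
        for i in 1 .. g_ids.count loop
            merge into participated_total t
            using (select p.driver_id, sum(p.damage_amount) as total
                     from participated p
                    where p.driver_id = g_ids(i)
                    group by p.driver_id) s
               on (t.driver_id = s.driver_id)
             when matched then
                update set t.damage_amount = s.total
             when not matched then
                insert (driver_id, damage_amount)
                values (s.driver_id, s.total);
        end loop;
    end after statement;
end participated_totals_trg;
/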
How about just no triggers at all?
SQL> create table participated (
2 incident_id int primary key,
3 driver_id int not null,
4 damage_amount int not null
5 );
Table created.
SQL>
SQL> insert into participated
2 select rownum, mod(rownum,10), dbms_random.value(1000,2000)
3 from dual
4 connect by level <= 200;
200 rows created.
SQL> create materialized view log on participated with rowid, (driver_id,damage_amount), sequence including new values;
Materialized view log created.
SQL> create materialized view driver_tot
2 refresh fast on commit
3 with primary key
4 as
5 select driver_id,
6 sum(damage_amount) tot,
7 count(*) cnt
8 from participated
9 group by driver_id;
Materialized view created.
SQL> select driver_id, tot
2 from driver_tot
3 order by 1;
 DRIVER_ID        TOT
---------- ----------
         0      32808
         1      29847
         2      28585
         3      29714
         4      32148
         5      30491   <====
         6      29258
         7      32103
         8      30131
         9      26834

10 rows selected.
SQL>
SQL> insert into participated values (9999,5,1234);
1 row created.
SQL> commit;
Commit complete.
SQL>
SQL> select driver_id, tot
2 from driver_tot
3 order by 1;
 DRIVER_ID        TOT
---------- ----------
         0      32808
         1      29847
         2      28585
         3      29714
         4      32148
         5      31725   <====
         6      29258
         7      32103
         8      30131
         9      26834

10 rows selected.
SQL>
SQL>
You didn't post your trigger definition, but normally you cannot query a table inside a trigger while modifying records in the same table.
Try using an AFTER UPDATE trigger; it might work in your case. Something like this:
CREATE OR REPLACE TRIGGER my_trigger
AFTER UPDATE ON my_table
FOR EACH ROW
DECLARE
    ...
BEGIN
    ...
END;
Another option would be to make your trigger an AUTONOMOUS_TRANSACTION:
CREATE OR REPLACE TRIGGER my_trigger
BEFORE INSERT ON my_table
FOR EACH ROW
DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    ...
END;
However, read this before choosing this option:
https://docs.oracle.com/cd/B14117_01/appdev.101/b10807/13_elems002.htm

Create a trigger for a logbook in postgresql and get data from 2 distinct tables after a delete

Hello, my problem is the following. I have two tables. The first is called connection and has these columns:
boxnum(PK) | date | partnum
boxnum is the PK.
Then there is the market table, which has the following fields:
boxnumm(PK)(FK) | entrydate | exitdate | existence(boolean)
What I want is that every time a record is deleted from the market table, it is registered in the table called logbook:
Logbook table
ID | boxnum | entrydatem | exitdatem | partnum
This is easy using a trigger fired by a delete. But the problem I have is that I want the connection boxnum to be linked to the market boxnumm, so I can get the partnum that the deleted record had at that time. What I have is this:
CREATE OR REPLACE FUNCTION insertar_trigger() RETURNS TRIGGER AS $insertar$
BEGIN
    INSERT INTO public.logbook (boxnum, entrydatem, exitdatem, partnum)
    SELECT old.boxnumm, old.entrydate, old.exitdate, cp.partnum
    FROM public.market me
    INNER JOIN public.connection cp ON me.boxnumm = cp.boxnum
    WHERE cp.boxnum = old.boxnumm;
    RETURN NULL;
END;
$insertar$ LANGUAGE plpgsql;

CREATE TRIGGER insertar_bitacora BEFORE DELETE
ON market FOR EACH ROW
EXECUTE PROCEDURE insertar_trigger();
But as you can see, I use BEFORE DELETE to do this. It works very well: the trigger saves the data I want, but in the market table the record is never erased.
It appears as deleted, but if I query the table again the apparently deleted rows are still there. Then I changed the BEFORE to AFTER, but that made it impossible to fulfill the WHERE part. I don't know how to fix it; if you could help me I would appreciate it.
Quote from the manual
Row-level triggers fired BEFORE can return null to signal the trigger manager to skip the rest of the operation for this row (i.e., subsequent triggers are not fired, and the INSERT/UPDATE/DELETE does not occur for this row) [...]Note that NEW is null in DELETE triggers, so returning that is usually not sensible. The usual idiom in DELETE triggers is to return OLD
(emphasis mine)
You are returning NULL from your BEFORE trigger. So your trigger function inserts the row into the logbook table, but the original DELETE is cancelled.
If you change RETURN NULL; to RETURN OLD; it should work.
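Applied to the function above, that is the only change needed (a sketch of the corrected version):
CREATE OR REPLACE FUNCTION insertar_trigger() RETURNS TRIGGER AS $insertar$
BEGIN
    INSERT INTO public.logbook (boxnum, entrydatem, exitdatem, partnum)
    SELECT old.boxnumm, old.entrydate, old.exitdate, cp.partnum
    FROM public.market me
    INNER JOIN public.connection cp ON me.boxnumm = cp.boxnum
    WHERE cp.boxnum = old.boxnumm;
    RETURN OLD;  -- lets the DELETE proceed
END;
$insertar$ LANGUAGE plpgsql;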

PostgreSQL INSERT or UPDATE values given a SELECT result after a trigger has been hit

Here is my structure (with values):
user_eval_history table
 user_eval_id | user_id | is_good_eval
--------------+---------+--------------
            1 |       1 | t
            2 |       1 | t
            3 |       1 | f
            4 |       2 | t
user_metrics table
 user_metrics_id | user_id | nb_good_eval | nb_bad_eval
-----------------+---------+--------------+-------------
               1 |       1 |            2 |           1
               2 |       2 |            1 |           0
For access-time (performance) reasons I want to avoid recomputing the user evaluations from the history again and again.
I would like to store/update the sums of evaluations for a given user every time a new evaluation is given, meaning that every time there is an INSERT into the user_eval_history table I want to update the user_metrics table for the corresponding user_id.
I feel like I can achieve this with a trigger and a stored procedure, but I'm not able to find the correct syntax for this.
I think I need to do what follows:
1. Create a trigger on user_eval_history:
CREATE TRIGGER update_user_metrics_trigger AFTER INSERT
ON user_eval_history
FOR EACH ROW
EXECUTE PROCEDURE update_user_metrics('user_id');
2. Create a stored procedure update_user_metrics that
2.1 Computes the metrics from the user_eval_history table for user_id
SELECT
  user_id,
  SUM( CASE WHEN is_good_eval = 't' THEN 1 ELSE 0 END ) as nb_good_eval,
  SUM( CASE WHEN is_good_eval = 'f' THEN 1 ELSE 0 END ) as nb_bad_eval
FROM user_eval_history
WHERE user_id = 'user_id' -- don't know the syntax here
2.2.1 Creates the entry into user_metrics if not already existing
INSERT INTO user_metrics
(user_id, nb_good_eval, nb_bad_eval) VALUES
(user_id, nb_good_eval, nb_bad_eval) -- Syntax?????
2.2.2 Updates the user_metrics entry if already existing
UPDATE user_metrics SET
(user_id, nb_good_eval, nb_bad_eval) = (user_id, nb_good_eval, nb_bad_eval)
I think I'm close to what is needed but don't know how to achieve this. In particular, I don't know the syntax.
Any idea?
Note: please, no "RTFM" answers; I looked for hours and didn't find anything but trivial examples.
First, revisit the assumption that maintaining an always current materialized view is a significant performance gain. You add a lot of overhead and make writes to user_eval_history a lot more expensive. The approach only makes sense if writes are rare while reads are more common. Else, consider a VIEW instead, which is more expensive for reads, but always current. With appropriate indexes on user_eval_history this may be cheaper overall.
Next, consider an actual MATERIALIZED VIEW (Postgres 9.3+) for user_metrics instead of keeping it up to date manually, especially if write operations to user_eval_history are very rare. The tricky part is when to refresh the MV.
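For illustration, a minimal sketch of that option (the view name is made up; REFRESH rebuilds the whole view, which is the tricky part mentioned above):
CREATE MATERIALIZED VIEW user_metrics_mv AS
SELECT user_id
     , sum(CASE WHEN is_good_eval THEN 1 ELSE 0 END) AS nb_good_eval
     , sum(CASE WHEN is_good_eval THEN 0 ELSE 1 END) AS nb_bad_eval
FROM   user_eval_history
GROUP  BY user_id;

-- rerun whenever user_eval_history has changed enough to matter:
REFRESH MATERIALIZED VIEW user_metrics_mv;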
Your approach makes sense if you are somewhere in between, user_eval_history has a non-trivial size and you need user_metrics to reflect the current state exactly and close to real-time.
Still on board? OK. First you need to define exactly what's allowed / possible and what's not. Can rows in user_eval_history be deleted? Can the last row of a user in user_eval_history be deleted? Probably yes, even if you would answer "No". Can rows in user_eval_history be updated? Can user_id be changed? Can is_good_eval be changed? If yes, you need to prepare for each of these cases.
Assuming the trivial case: INSERT only. No UPDATE, no DELETE. There is still the possible race condition you have been discussing with #sn00k4h. You found an answer to that, but that's really for INSERT or SELECT, while you have a classical UPSERT problem: INSERT or UPDATE:
FOR UPDATE, like you considered in the comments, is not the silver bullet here. UPDATE user_metrics ... locks the row it updates anyway. The problematic case is when two INSERTs try to create a row for a new user_id concurrently. In Postgres, you cannot lock key values that are not yet present in the unique index, so FOR UPDATE can't help. You need to prepare for a possible unique violation and retry, as discussed in these linked answers:
Upsert with a transaction
How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
Code
Assuming these table definitions:
CREATE TABLE user_eval_history (
user_eval_id serial PRIMARY KEY
, user_id int NOT NULL
, is_good_eval boolean NOT NULL
);
CREATE TABLE user_metrics (
  user_metrics_id serial -- seems useless
, user_id int PRIMARY KEY
, nb_good_eval int NOT NULL DEFAULT 0
, nb_bad_eval int NOT NULL DEFAULT 0
);
First, you need a trigger function before you can create a trigger.
CREATE OR REPLACE FUNCTION trg_user_eval_history_upaft()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   LOOP
      IF NEW.is_good_eval THEN
         UPDATE user_metrics
         SET    nb_good_eval = nb_good_eval + 1
         WHERE  user_id = NEW.user_id;
      ELSE
         UPDATE user_metrics
         SET    nb_bad_eval = nb_bad_eval + 1
         WHERE  user_id = NEW.user_id;
      END IF;
      EXIT WHEN FOUND;

      BEGIN  -- enter block with exception handling
         IF NEW.is_good_eval THEN
            INSERT INTO user_metrics (user_id, nb_good_eval)
            VALUES (NEW.user_id, 1);
         ELSE
            INSERT INTO user_metrics (user_id, nb_bad_eval)
            VALUES (NEW.user_id, 1);
         END IF;
         RETURN NULL;  -- returns from function, NULL for AFTER trigger
      EXCEPTION WHEN UNIQUE_VIOLATION THEN  -- user_metrics.user_id is UNIQUE
         RAISE NOTICE 'It actually happened!';  -- hardly ever happens
      END;
   END LOOP;
   RETURN NULL;  -- NULL for AFTER trigger
END
$func$;
In particular, you don't pass user_id as a parameter to the trigger function. The special variable NEW holds the values of the triggering row automatically. Details in the manual here.
Trigger:
CREATE TRIGGER upaft_update_user_metrics
AFTER INSERT ON user_eval_history
FOR EACH ROW EXECUTE PROCEDURE trg_user_eval_history_upaft();
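A quick sanity check (hypothetical values):
INSERT INTO user_eval_history (user_id, is_good_eval) VALUES (3, true);

SELECT * FROM user_metrics WHERE user_id = 3;
-- expected: one row with nb_good_eval = 1 and nb_bad_eval = 0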

How can I prevent multiple records of a type in a many-to-many relationship?

I have four tables.
PERSON    DELIVERY_MAPPING        GENERATION_SYSTEM     DELIVERY_METHOD
------    ----------------        -----------------     ---------------
ID        PERSON_ID               ID                    ID
NAME      GENERATION_SYSTEM_ID    NAME                  NAME
                                  DELIVERY_METHOD_ID    IS_SPECIAL
Example data:
PERSON      DELIVERY_MAPPING    GENERATION_SYSTEM      DELIVERY_METHOD
------      ----------------    -----------------      ---------------
1. TOM      1 1                 1. COLOR PRINTER  1    1. EMAIL    N
2. DICK     1 2                 2. BW PRINTER     1    2. POST     N
3. HARRY    2 3                 3. HANDWRITTEN    3    3. PIGEONS  Y
A DELIVERY_METHOD contains ways to deliver new letters: EMAIL, POST, PIGEON. The IS_SPECIAL column marks a record as a means of special delivery, indicated by a simple value of Y or N. Only PIGEON is a special delivery method (Y); the others are not (N).
The GENERATION_SYSTEM table holds the information about the system that will finally print the letter. Example values are COLOR PRINTER and DOT MATRIX PRINTER. Each GENERATION_SYSTEM will always be delivered using one of the DELIVERY_METHODs; there's a foreign key between GENERATION_SYSTEM and DELIVERY_METHOD.
Now, each PERSON can have his letters generated by different GENERATION_SYSTEMs, and since that is a many-to-many relation, we have the DELIVERY_MAPPING table; that's why we have foreign keys on both ends.
So far, so good.
I need to ensure that if a person has his letters generated by a system that uses a special delivery method, then he cannot be allowed to have multiple generation systems in the mappings list. For example, Dick can't have his letters generated using the colour printer because he already gets all his handwritten letters delivered by pigeon (which is marked as a special delivery method).
How would I accomplish such a constraint? I tried doing it with a before-insert-or-update trigger on the DELIVERY_MAPPING table, but that causes the mutating table problem when updating.
Can I normalise this scenario even more? Maybe it is just that I haven't normalised my tables properly.
Either way, I'd love to hear your take on this issue. I hope I've been verbose enough (...and if you can propose a better title for this post, that would be great).
For a complicated constraint like this, I think you need to use triggers. I don't think the mutating table problem is an issue, because you are either going to do an update or do nothing.
The only table you need to worry about is Delivery_Mapping. Before allowing a change to this table, you need to run a query on the existing table to get the number of special methods and generation systems:
select SUM(case when dme.is_special = 'Y' then 1 else 0 end) as NumSpecial,
       count(distinct gs.id) as NumGS,
       MIN(gs.id) as GSID
from delivery_mapping dm join
     generation_system gs
     on dm.generation_system_id = gs.id join
     delivery_method dme
     on gs.delivery_method_id = dme.id
where dm.person_id = PERSONID
With this information, you can check whether the insert/update can proceed. I think you need to check these conditions:
If NumSpecial = 0 and the new delivery method is not special, then proceed.
If NumSpecial = 0 and NumGS = 0, then proceed.
Otherwise fail.
The logic is a bit more complicated for updates.
By the way, I prefer to wrap updates/inserts/deletes in stored procedures, so logic like this doesn't get hidden in triggers. I find that debugging and maintaining procedures is much easier than dealing with triggers, which can cascade in ways that are hard to follow.
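A sketch of that procedural approach against the tables from the question (the procedure name, the FOR UPDATE serialization, and the error message are illustrative):
create or replace procedure add_delivery_mapping(
    p_person_id            in delivery_mapping.person_id%type,
    p_generation_system_id in delivery_mapping.generation_system_id%type
) as
    l_lock        person.id%type;
    l_num_special pls_integer;
    l_num_gs      pls_integer;
    l_new_special delivery_method.is_special%type;
begin
    -- serialize all mapping changes for this person
    select id into l_lock
      from person
     where id = p_person_id
       for update;

    -- current state, as in the query above (nvl covers the no-rows case)
    select nvl(sum(case when dme.is_special = 'Y' then 1 else 0 end), 0),
           count(distinct gs.id)
      into l_num_special, l_num_gs
      from delivery_mapping dm
      join generation_system gs on dm.generation_system_id = gs.id
      join delivery_method dme on gs.delivery_method_id = dme.id
     where dm.person_id = p_person_id;

    -- is the new generation system special?
    select dme.is_special
      into l_new_special
      from generation_system gs
      join delivery_method dme on gs.delivery_method_id = dme.id
     where gs.id = p_generation_system_id;

    if l_num_special = 0 and (l_new_special = 'N' or l_num_gs = 0) then
        insert into delivery_mapping (person_id, generation_system_id)
        values (p_person_id, p_generation_system_id);
    else
        raise_application_error(-20001,
            'mix of special and non-special delivery methods for person '
            || p_person_id);
    end if;
end add_delivery_mapping;
/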
I'd avoid triggers on the base tables for this unless you can guarantee serialization.
You could use an API (the best way), as Gordon says (again, be sure to serialize), or if that isn't suitable, use a materialized view (we don't need to serialize here, as the check is done on commit):
SQL> create materialized view log on person with rowid, primary key including new values;
Materialized view log created.
SQL> create materialized view log on delivery_mapping with rowid, primary key including new values;
Materialized view log created.
SQL> create materialized view log on generation_system with rowid, primary key (delivery_method_id) including new values;
Materialized view log created.
SQL> create materialized view log on delivery_method with rowid, primary key (is_special) including new values;
Materialized view log created.
We create a materialized view to show the counts of special and non-special links for each user:
SQL> create materialized view check_del_method
2 refresh fast on commit
3 with primary key
4 as
5 select pers.id, count(case del_meth.is_special when 'Y' then 1 end) special_count,
6 count(case del_meth.is_special when 'N' then 1 end) non_special_count
7 from person pers
8 inner join delivery_mapping del_map
9 on pers.id = del_map.person_id
10 inner join generation_system gen
11 on gen.id = del_map.generation_system_id
12 inner join delivery_method del_meth
13 on del_meth.id = gen.delivery_method_id
14 group by pers.id;
Materialized view created.
The MView is defined as fast refresh on commit, so the modified rows are rebuilt on commit. Now the rule is that if both the special and the non-special counts are non-zero, that's an error condition.
SQL> create trigger check_del_method_aiu
2 after insert or update on check_del_method
3 for each row
4 declare
5 begin
6 if (:new.special_count > 0 and :new.non_special_count > 0)
7 then
8 raise_application_error(-20000, 'Cannot have a mix of special and non special delivery methods for user ' || :new.id);
9 end if;
11 end;
12 /
Trigger created.
SQL> set serverout on
SQL> insert into delivery_mapping values (1, 3);
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-20000: Cannot have a mix of special and non special delivery methods for
user 1
ORA-06512: at "TEST.CHECK_DEL_METHOD_AIU", line 6
ORA-04088: error during execution of trigger 'TEST.CHECK_DEL_METHOD_AIU'
CREATE MATERIALIZED VIEW special_queues_mv
NOLOGGING
CACHE
BUILD IMMEDIATE
REFRESH ON COMMIT
ENABLE QUERY REWRITE
AS SELECT dmap.person_id
, SUM(DECODE(dmet.is_special, 'Y', 1, 0)) AS special_queues
, SUM(DECODE(dmet.is_special, 'N', 1, 0)) AS regular_queues
FROM delivery_mapping dmap
, generation_system gsys
, delivery_method dmet
WHERE dmap.generation_system_id = gsys.id
AND gsys.delivery_method_id = dmet.id
GROUP
BY dmap.person_id
/
ALTER MATERIALIZED VIEW special_queues_mv
ADD ( CONSTRAINT special_queues_mv_chk1 CHECK ((special_queues = 1 AND regular_queues = 0) OR ( regular_queues > 0 AND special_queues = 0 ) ) ENABLE VALIDATE)
/
That's how I did it. DazzaL's answer gave me a hint on how to do it.