Bypass mutating trigger in Oracle - sql

I have two tables; on update of table 1 (cars bought), I need to update the other table with the sum of total cars bought by a specific customer.
When I try it, I hit a mutating-table error. I have tried converting it to a compound trigger, but then I run into a large number of other errors (null index, etc.).
detail_table:
+----------+---------+--------+------+------+
| customer | car     | number | cost | sold |
+----------+---------+--------+------+------+
| josh     | mustang | 2      | 5    | y    |
| josh     | ford    | 3      | 2    | y    |
| josh     | tesla   | 1      | 3    | n    |  --> to update to y
| john     | chevy   | 4      | 1    | y    |
| john     | chevy   | 5      | 2    | y    |
+----------+---------+--------+------+------+
On update of sold from n to y, the rows must roll up and sum into this summary table
summary_table:
+----------+------------+------------+
| customer | total cars | total cost |
+----------+------------+------------+
| josh     | 5          | 7          |  --> before update on detail
+----------+------------+------------+

+----------+------------+------------+
| customer | total cars | total cost |
+----------+------------+------------+
| josh     | 6          | 10         |  --> after update on detail
+----------+------------+------------+
In the end, when the user updates n to y for josh, total cars should go up by 1 (to 6) and total cost should become 10.
trigger code:
CREATE OR REPLACE TRIGGER update_summary_table
AFTER UPDATE ON detail_table
REFERENCING OLD AS old NEW AS new
FOR EACH ROW
BEGIN
    FOR i IN (SELECT SUM(number) s_car, SUM(cost) s_cost
                FROM detail_table
               WHERE customer = :new.customer
                 AND sold = 'y') LOOP
        UPDATE summary_table
           SET total_cars = i.s_car,
               total_cost = i.s_cost
         WHERE customer = :new.customer;
    END LOOP;
END update_summary_table;

Why would you use a for loop for this? Just use a single update:
update summary_table
   set total_cars = coalesce(total_cars, 0) + :new.number - :old.number,
       total_cost = coalesce(total_cost, 0) + :new.cost - :old.cost
 where customer = :new.customer;
This applies an incremental change to the summary row rather than recalculating it from scratch, and because it never reads detail_table it cannot raise the mutating-table error.
The coalesce() is just a safeguard in case no default value is given for new rows and the summary columns are NULL.
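For completeness, a minimal sketch of a full trigger built around that incremental idea, adapted to the n -> y flip described in the question. The column names come from the question's table, except that "number" is a reserved word in Oracle, so the quantity column is assumed here to be called num_cars (a hypothetical name):
CREATE OR REPLACE TRIGGER update_summary_table
AFTER UPDATE OF sold ON detail_table
FOR EACH ROW
WHEN (OLD.sold = 'n' AND NEW.sold = 'y')  -- only react to the n -> y flip
BEGIN
    -- Add this row's quantity and cost to the customer's summary row;
    -- no read of detail_table, so the trigger cannot hit ORA-04091.
    UPDATE summary_table
       SET total_cars = COALESCE(total_cars, 0) + :NEW.num_cars,
           total_cost = COALESCE(total_cost, 0) + :NEW.cost
     WHERE customer = :NEW.customer;
END;
/
With the sample data, flipping josh's tesla row from n to y adds 1 car and 3 cost, giving the 6 / 10 shown in the summary table above.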

Related

Returning singular row/value from joined table date based on closest date

I have a Production Table and a Standing Data table. The relationship of Production to Standing Data is actually Many-To-Many which is different to how this relationship is usually represented (Many-to-One).
The standing data table holds a list of tasks and the score each task is worth. Tasks can appear multiple times with different "ValidFrom" dates for changing the score at different points in time. What I am trying to do is query the Production Table so that the TaskID is looked up in the table and uses the date it was logged to check what score it should return.
Here's an example of how I want the data to look:
Production Table:
+----------+------------+-------+-----------+--------+-------+
| RecordID | Date       | EmpID | Reference | TaskID | Score |
+----------+------------+-------+-----------+--------+-------+
| 1        | 27/02/2020 | 1     | 123       | 1      | 1.5   |
| 2        | 27/02/2020 | 1     | 123       | 1      | 1.5   |
| 3        | 30/02/2020 | 1     | 123       | 1      | 2     |
| 4        | 31/02/2020 | 1     | 123       | 1      | 2     |
+----------+------------+-------+-----------+--------+-------+
Standing Data
+----------+--------+----------------+-------+
| RecordID | TaskID | DateActiveFrom | Score |
+----------+--------+----------------+-------+
| 1        | 1      | 01/02/2020     | 1.5   |
| 2        | 1      | 28/02/2020     | 2     |
+----------+--------+----------------+-------+
I have tried the below code but unfortunately due to multiple records meeting the criteria, the production data duplicates with two different scores per record:
SELECT p.[RecordID],
       p.[Date],
       p.[EmpID],
       p.[Reference],
       p.[TaskID],
       s.[Score]
FROM ProductionTable AS p
LEFT JOIN StandingDataTable AS s
       ON s.[TaskID] = p.[TaskID]
      AND s.[DateActiveFrom] <= p.[Date];
What is the correct way to return the correct and singular/scalar Score value for this record based on the date?
You can use APPLY:
SELECT p.[RecordID], p.[Date], p.[EmpID], p.[Reference], p.[TaskID], s.[Score]
FROM ProductionTable AS p OUTER APPLY
     (SELECT TOP (1) s.[Score]
      FROM StandingDataTable AS s
      WHERE s.[TaskID] = p.[TaskID]
        AND s.[DateActiveFrom] <= p.[Date]
      ORDER BY s.[DateActiveFrom] DESC
     ) s;
If you want the score determined at the record level instead, adjust the WHERE clause inside the APPLY accordingly.
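For reference, the same lookup can also be written as a correlated scalar subquery; this is just an equivalent alternative to the OUTER APPLY above, using the table and column names from the question:
SELECT p.[RecordID], p.[Date], p.[EmpID], p.[Reference], p.[TaskID],
       (SELECT TOP (1) s.[Score]
        FROM StandingDataTable AS s
        WHERE s.[TaskID] = p.[TaskID]
          AND s.[DateActiveFrom] <= p.[Date]
        ORDER BY s.[DateActiveFrom] DESC) AS [Score]
FROM ProductionTable AS p;
Both forms pick, per production row, the single standing-data row with the latest DateActiveFrom that is not after the production date, so each record gets exactly one score.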

DAX calculated column using IF to evaluate value from another table

I'm using SSAS Tabular, where I have two tables with a 1:n relationship, Position and Transaction. There is an active 1:n relationship on PositionID.
Position
+------------+------+--------+
| PositionID | Type | Source |
+------------+------+--------+
| C1000      | A    | 1      |
| C1200      | B    | 2      |
| C1400      | C    | 1      |
+------------+------+--------+
Transaction
+---------+------------+--------+
| TransID | PositionID | Amount |
+---------+------------+--------+
| 1       | C1000      | 150    |
| 2       | C1000      | 200    |
| 3       | C1400      | 350    |
+---------+------------+--------+
I want to create a calculated column on table Transaction which has the following logic:
IF Position[Type]="A" AND Position[Source]<>1 THEN Transaction[Amount] * -1 ELSE Transaction[Amount]
I've tried using the RELATED function in DAX but it's not detecting the related Position table; when I type it manually it returns the error "cannot find table":
=IF(RELATED(Position[Type]) = "A" && RELATED(Position[Source]) <> 1; -1 * Transaction[Amount]; Transaction[Amount])
I have no duplicates on the table which is on the 1 side of the 1:n relationship. Should I try a different DAX function?
I tried using LOOKUPVALUE, and so far it looks good:
=IF(
    LOOKUPVALUE(Position[Type]; Position[PositionID]; Transaction[PositionID]) = "A"
        && LOOKUPVALUE(Position[Source]; Position[PositionID]; Transaction[PositionID]) <> "1";
    -1 * Transaction[Amount];
    Transaction[Amount]
)

Mutating error on an AFTER insert trigger

CREATE OR REPLACE TRIGGER TRG_INVOICE
AFTER INSERT ON INVOICE
FOR EACH ROW
DECLARE
    V_SERVICE_COST    FLOAT;
    V_SPARE_PART_COST FLOAT;
    V_TOTAL_COST      FLOAT;
    V_INVOICE_DATE    DATE;
    V_DUEDATE         DATE;
    V_REQ_ID          INVOICE.SERVICE_REQ_ID%TYPE;
    V_INV_ID          INVOICE.INVOICE_ID%TYPE;
BEGIN
    V_REQ_ID := :NEW.SERVICE_REQ_ID;
    V_INV_ID := :NEW.INVOICE_ID;

    SELECT SUM(S.SERVICE_COST)
      INTO V_SERVICE_COST
      FROM INVOICE I, SERVICE_REQUEST SR, SERVICE S, SERVICE_REQUEST_TYPE SRT
     WHERE I.SERVICE_REQ_ID = SR.SERVICE_REQ_ID
       AND SR.SERVICE_REQ_ID = SRT.SERVICE_REQ_ID
       AND SRT.SERVICE_ID = S.SERVICE_ID
       AND I.SERVICE_REQ_ID = V_REQ_ID;

    SELECT SUM(SP.PRICE)
      INTO V_SPARE_PART_COST
      FROM INVOICE I, SERVICE_REQUEST SR, SERVICE S, SERVICE_REQUEST_TYPE SRT,
           SPARE_PART_SERVICE SRP, SPARE_PART SP
     WHERE I.SERVICE_REQ_ID = SR.SERVICE_REQ_ID
       AND SR.SERVICE_REQ_ID = SRT.SERVICE_REQ_ID
       AND SRT.SERVICE_ID = S.SERVICE_ID
       AND S.SERVICE_ID = SRP.SERVICE_ID
       AND SRP.SPARE_PART_ID = SP.SPARE_PART_ID
       AND I.SERVICE_REQ_ID = V_REQ_ID;

    V_TOTAL_COST := V_SERVICE_COST + V_SPARE_PART_COST;

    SELECT SYSDATE INTO V_INVOICE_DATE FROM DUAL;
    SELECT ADD_MONTHS(SYSDATE, 1) INTO V_DUEDATE FROM DUAL;

    UPDATE INVOICE
       SET COST_SERVICE_REQ = V_SERVICE_COST,
           COST_SPARE_PART  = V_SPARE_PART_COST,
           TOTAL_BALANCE    = V_TOTAL_COST,
           PAYMENT_DUEDATE  = V_DUEDATE,
           INVOICE_DATE     = V_INVOICE_DATE
     WHERE INVOICE_ID = V_INV_ID;
END;
I'm trying to calculate some columns after the user inserts a row.
Using the service_request_id I want to calculate the service/parts/total cost. Also, I would like to generate the creation and due dates. But, I keep getting
INVOICE is mutating, trigger/function may not see it
Not sure how the table is mutating after the insert statement.
"Not sure how the table is mutating after the insert statement."
Imagine a simple table:
create table x(
    x int,
    my_sum int
);
and an AFTER INSERT FOR EACH ROW trigger, similar to yours, which calculates the sum of all values in the table and updates the my_sum column.
Now imagine this insert statement:
insert into x( x )
select 1 as x from dual
connect by level <= 1000;
This single statement inserts 1000 records, each with the value 1; see this demo: http://sqlfiddle.com/#!4/0f211/7
Since in SQL each individual statement must be atomic (more on this here: Statement-Level Read Consistency), Oracle is free to perform this query in any way as long as the final result is correct (consistent). It can save records in the order of execution, maybe in reverse order, or it can divide the batch into 10 threads and run them in parallel.
Since the trigger is fired individually after inserting each row, and it cannot know in advance the "final" result, all of the results below are possible depending on the "internal" method Oracle chooses to execute the query. As you can see, these results do not meet the definition of consistency, and Oracle prevents this by raising the mutating-table error.
In other words - your assumptions are wrong and your design is flawed; you need to change it. One common way to restructure it is sketched after the example outputs below.
| X | MY_SUM |
|---|--------|
| 1 | 1      |
| 1 | 2      |
| 1 | 3      |
| 1 | 4      |
...
or maybe:
| X | MY_SUM |
|---|--------|
| 1 | 1000   |
| 1 | 1000   |
| 1 | 1000   |
| 1 | 1000   |
| 1 | 1000   |
| 1 | 1000   |
| 1 | 1000   |
...
or maybe:
| X | MY_SUM |
|---|--------|
| 1 | 4      |
| 1 | 8      |
| 1 | 12     |
| 1 | 16     |
| 1 | 20     |
| 1 | 24     |
| 1 | 28     |
...
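As a concrete illustration of restructuring (a common pattern, not part of the answer above): a compound trigger can buffer the inserted invoice IDs in its row-level section and defer all reads and updates of INVOICE to the statement-level section, where the table is no longer mutating. A minimal sketch against the question's INVOICE table:
CREATE OR REPLACE TRIGGER TRG_INVOICE
FOR INSERT ON INVOICE
COMPOUND TRIGGER

    TYPE t_id_tab IS TABLE OF INVOICE.INVOICE_ID%TYPE;
    g_ids t_id_tab := t_id_tab();

    AFTER EACH ROW IS
    BEGIN
        -- Only remember which rows were inserted; no queries here.
        g_ids.EXTEND;
        g_ids(g_ids.COUNT) := :NEW.INVOICE_ID;
    END AFTER EACH ROW;

    AFTER STATEMENT IS
    BEGIN
        FOR i IN 1 .. g_ids.COUNT LOOP
            -- The cost lookups from the original trigger would go here,
            -- keyed by the buffered INVOICE_ID; querying INVOICE is now allowed.
            UPDATE INVOICE
               SET INVOICE_DATE    = SYSDATE,
                   PAYMENT_DUEDATE = ADD_MONTHS(SYSDATE, 1)
             WHERE INVOICE_ID = g_ids(i);
        END LOOP;
    END AFTER STATEMENT;

END TRG_INVOICE;
/
Whether this is the right design is a separate question; computing derived totals at query time or in a view often avoids the problem entirely.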

Count numbers in single row - SQL

Is it possible to return the count of values in a single row?
For example, this is my test table and I want the count of daily_typing_pages:
SQL> SELECT * FROM employee_tbl;
+------+------+------------+--------------------+
| id   | name | work_date  | daily_typing_pages |
+------+------+------------+--------------------+
|    1 | John | 2007-01-24 |                250 |
|    2 | Ram  | 2007-05-27 |                220 |
|    3 | Jack | 2007-05-06 |                170 |
|    3 | Jack | 2007-04-06 |                100 |
|    4 | Jill | 2007-04-06 |                220 |
|    5 | Zara | 2007-06-06 |                300 |
|    5 | Zara | 2007-02-06 |                350 |
+------+------+------------+--------------------+
The result of this count should be 1610. However, if I simply use COUNT() around it, it returns:
SQL> SELECT COUNT(daily_typing_pages) FROM employee_tbl;
+---------------------------+
| COUNT(daily_typing_pages) |
+---------------------------+
|                         7 |
+---------------------------+
1 row in set (0.01 sec)
So it returns the number of rows instead of the total I want in a single row.
Is there some way to do this without using an external programming language to add it up for me?
Thanks
You want SUM instead of COUNT. COUNT merely counts the number of records; you want them summed.
You didn't mention your DBMS, but see, for example, this for SQL Server.
Did you mean you want to sum all the numbers in daily_typing_pages?
If so, you can use sum(daily_typing_pages):
SELECT SUM(daily_typing_pages) FROM employee_tbl
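As a quick side-by-side check against the sample data above (250 + 220 + 170 + 100 + 220 + 300 + 350 = 1610):
SELECT COUNT(daily_typing_pages) AS row_count,   -- 7: number of rows
       SUM(daily_typing_pages)   AS total_pages  -- 1610: the value the question asks for
FROM employee_tbl;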

Pick a record based on a given value in postgres

I have a table in postgres like below,
 alg_campaignid | alg_score |   cp    |   sum
----------------+-----------+---------+----------
           9829 |  30.44056 | 12.4000 |  12.4000
           9880 |  29.59280 | 12.0600 |  24.4600
           9882 |  29.59280 | 12.0600 |  36.5200
           9827 |  29.27504 | 11.9300 |  48.4500
           9821 |  29.14840 | 11.8800 |  60.3300
           9881 |  29.14840 | 11.8800 |  72.2100
           9883 |  29.14840 | 11.8800 |  84.0900
          10026 |  28.79280 | 11.7300 |  95.8200
          10680 |  10.31504 |  4.1800 | 100.0000
From this table I have to select a record based on a randomly generated number from 0 to 100, i.e. the first record should be returned if the random number is between 0 and 12.4000, the second if it is between 12.4000 and 24.4600, and likewise the last if it is between 95.8200 and 100.0000.
For example:
if the random number picked is 8 then the first record should be returned
or
if the random number picked is 48 then the fourth record should be returned
Is it possible to do this in Postgres? If so, kindly recommend a solution.
Yes, you can do this in Postgres. If you want to generate the number in the database:
with r as (
      select random() * 100 as r
     )
select t.*
from table t cross join r
where t.sum >= r.r
order by t.sum
limit 1;
This keeps the first row whose cumulative sum is at least the generated number, which matches the ranges described above.
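To sanity-check the boundaries you can substitute a fixed value for random(); a small sketch, where weights stands in for whatever the table from the question is actually called:
with r as (
      select 48.0 as r   -- the question's example: 48 should hit the fourth row
     )
select t.*
from weights t cross join r
where t.sum >= r.r
order by t.sum
limit 1;
-- Expected with the sample data: alg_campaignid 9827 (cumulative sum 48.4500)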