Oracle SQL: split up rows to fill maxquantity with reference articles

This is an extended question to this already answered thread.
Say I have a list of articles that I want to split up to fill max values, including add-on articles (no. 7) that refer to other positions:
id | ref | name | quantity | maxquantity
1 | null | name_a| 3 | 5
2 | null | name_a| 1 | 5
3 | null | name_a| 3 | 5
4 | null | name_a| 5 | 5
5 | null | name_b| 7 | 4
6 | null | name_b| 2 | 4
7 | 5 | add_1 | 14 | null
I want to create packages grouped by name, filled up to the max values, keeping the reference relationship and the ratio of referenced articles to referencing articles, to get the following results:
1 | null | name_a| 3 | 5 | name_a_part1 | 3
2 | null | name_a| 1 | 5 | name_a_part1 | 1
3 | null | name_a| 3 | 5 | name_a_part1 | 1
^- sum() = maxquantity
3 | null | name_a| 3 | 5 | name_a_part2 | 2
4 | null | name_a| 5 | 5 | name_a_part2 | 3
^- sum() = maxquantity
4 | null | name_a| 5 | 5 | name_a_part3 | 2
^- sum() = maxquantity or the rest of name_a
5 | null | name_b| 7 | 4 | name_b_part1 | 4
^- sum() = maxquantity
5 | null | name_b| 7 | 4 | name_b_part2 | 3
6 | null | name_b| 2 | 4 | name_b_part2 | 1
^- sum() = maxquantity
6 | null | name_b| 2 | 4 | name_b_part3 | 1
^- sum() = maxquantity or the rest of name_b
7 | 5 | add_1 | 14 | null | name_b_part1 | 8
7 | 5 | add_1 | 14 | null | name_b_part2 | 6
The ratio of pos 5 to pos 7 is 1:2.
The name or the number of the final bins should match between referenced articles and referencing articles.
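To make the ratio rule concrete, here is a minimal illustrative sketch using the sample numbers above (the bin names and the per-bin main quantities are copied from the expected output; the literal factor 2 is the ratio 14 / 7 of pos 7 to pos 5):

```sql
-- Illustration only: each name_b bin carries ratio * main_quantity add-on units,
-- until the add-on's total quantity (14) is exhausted.
SELECT bin, main_qty, main_qty * 2 AS addon_qty
FROM (
  SELECT 'name_b_part1' AS bin, 4 AS main_qty FROM DUAL UNION ALL
  SELECT 'name_b_part2' AS bin, 3 AS main_qty FROM DUAL
);
-- name_b_part1 -> 8, name_b_part2 -> 6; together 14, matching the rows for pos 7
```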

I managed to solve this issue.
Create the table via:
CREATE TABLE articles (pos, ref_pos, article, quantity, maxquantity ) AS
SELECT 0, NULL, 'prod1', 3, 6 FROM DUAL UNION ALL
SELECT 1, NULL, 'prod1', 3, 6 FROM DUAL UNION ALL
SELECT 2, NULL, 'prod1', 8, 6 FROM DUAL UNION ALL
SELECT 7, 2, 'addon_for_pos2', 16, NULL FROM DUAL
and this SQL gets the correct results:
WITH split_bins (pos, ref_pos, article, quantity, maxquantity, bin_tag, bin_tag2,
                 effective_quantity, prev_quantity, effective_name, ratio) AS (
  -- ################### the first, static iteration
  SELECT pos,
         ref_pos,
         article,
         quantity,
         -- ################### calculate the max quantity
         COALESCE(
           maxquantity,
           CONNECT_BY_ROOT maxquantity * quantity / CONNECT_BY_ROOT quantity
         ) AS maxquantity,
         -- ################### calculate the bin_tag for grouping
         FLOOR(
           COALESCE(
             SUM(quantity) OVER (
               PARTITION BY article
               ORDER BY pos
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
             ),
             0
           )
           / COALESCE(
               maxquantity,
               CONNECT_BY_ROOT maxquantity * quantity / CONNECT_BY_ROOT quantity
             )
         ) + 1 AS bin_tag,
         -- ################### calculate the bin_tag for grouping supplements to the correct bin
         FLOOR(
           COALESCE(
             SUM(quantity) OVER (
               PARTITION BY article, pos
               ORDER BY pos
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
             ),
             0
           )
           / COALESCE(
               maxquantity,
               CONNECT_BY_ROOT maxquantity * quantity / CONNECT_BY_ROOT quantity
             )
         ) + 1 AS bin_tag2,
         -- ################### calculate the effective quantity
         LEAST(
           COALESCE(
             maxquantity,
             CONNECT_BY_ROOT maxquantity * quantity / CONNECT_BY_ROOT quantity
           )
           - MOD(
               COALESCE(
                 SUM(quantity) OVER (
                   PARTITION BY article
                   ORDER BY pos
                   ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
                 ),
                 0
               ),
               COALESCE(
                 maxquantity,
                 CONNECT_BY_ROOT maxquantity * quantity / CONNECT_BY_ROOT quantity
               )
             ),
           quantity
         ) AS effective_quantity,
         -- ################### previously used quantity (start with zero)
         0 AS prev_quantity,
         -- ################### propagate the referenced article to the referencing articles
         CONNECT_BY_ROOT article AS effective_name,
         -- ################### calculate the ratio of main articles and addons (just dev)
         quantity / CONNECT_BY_ROOT quantity AS ratio
  FROM articles
  START WITH ref_pos IS NULL
  CONNECT BY PRIOR pos = ref_pos
  -- ################### the 2nd to nth iteration
  UNION ALL
  -- (pos, ref_pos, article, quantity, maxquantity, bin_tag, bin_tag2, effective_quantity, prev_quantity, effective_name, ratio)
  SELECT pos,
         ref_pos,
         article,
         quantity,
         maxquantity,
         -- ################### increase the identifier
         bin_tag + 1 AS bin_tag,
         bin_tag2 + 1 AS bin_tag2,
         -- ################### calculate the current effective_quantity
         LEAST(
           quantity - prev_quantity - effective_quantity,
           maxquantity
         ) AS effective_quantity,
         -- ################### calculate the prev_quantity for the next iteration
         prev_quantity + effective_quantity AS prev_quantity,
         effective_name,
         ratio
  FROM split_bins
  WHERE prev_quantity + effective_quantity < quantity
)
-- ################### final select of the data from the with-clause
SELECT pos, ref_pos, article, quantity, maxquantity, bin_tag, bin_tag2,
       effective_quantity, prev_quantity, effective_name, ratio,
       effective_name || '_limit_' || CONNECT_BY_ROOT bin_tag AS id
FROM split_bins
START WITH ref_pos IS NULL
CONNECT BY PRIOR pos = ref_pos AND PRIOR bin_tag2 = bin_tag2
ORDER BY pos, bin_tag;
fiddle
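As a sanity check of the query above (hypothetical: it assumes the full query has been saved as a view named split_result, which is not part of the original post), the per-bin totals of the main articles must never exceed their cap:

```sql
-- Should return no rows: every bin of a main article stays within maxquantity.
SELECT effective_name,
       bin_tag,
       SUM(effective_quantity) AS bin_total,
       MAX(maxquantity)        AS cap
FROM split_result              -- hypothetical view wrapping the query above
WHERE ref_pos IS NULL          -- main articles only; addons follow via the ratio
GROUP BY effective_name, bin_tag
HAVING SUM(effective_quantity) > MAX(maxquantity);
```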

Related

Oracle sql split up rows to fill maxquantity

Say I have a list of articles that I want to split up to fill max values:
id | name | quantity | maxquantity
1 | name_a| 3 | 5
2 | name_a| 1 | 5
3 | name_a| 3 | 5
4 | name_a| 5 | 5
5 | name_b| 7 | 4
6 | name_b| 2 | 4
I want to create packages grouped by name, filled up to the max values, to get the following results:
id | name | quantity | maxquantity | tag | effective_quantity
1 | name_a| 3 | 5 | name_a_part1 | 3
2 | name_a| 1 | 5 | name_a_part1 | 1
3 | name_a| 3 | 5 | name_a_part1 | 1
^- sum() = maxquantity
3 | name_a| 3 | 5 | name_a_part2 | 2
4 | name_a| 5 | 5 | name_a_part2 | 3
^- sum() = maxquantity
4 | name_a| 5 | 5 | name_a_part3 | 2
^- sum() = maxquantity or the rest of name_a
5 | name_b| 7 | 4 | name_b_part1 | 4
^- sum() = maxquantity
5 | name_b| 7 | 4 | name_b_part2 | 3
6 | name_b| 2 | 4 | name_b_part2 | 1
^- sum() = maxquantity
6 | name_b| 2 | 4 | name_b_part3 | 1
^- sum() = maxquantity or the rest of name_b
One pretty simple method is to explode the data into a separate row for each item, calculate the bins at that level, and then reaggregate:
with cte (id, name, quantity, maxquantity, n) as (
  select id, name, quantity, maxquantity, 1 as n
  from t
  union all
  select id, name, quantity, maxquantity, n + 1
  from cte
  where n < quantity
)
select id, name, quantity, maxquantity,
       count(*) as number_in_bin,
       ceil(bin_counter / maxquantity) as bin_number
from (select cte.*,
             row_number() over (partition by name order by id, n) as bin_counter
      from cte
     ) cte
group by id, name, quantity, maxquantity, ceil(bin_counter / maxquantity)
order by id, bin_number;
Here is a db<>fiddle.
You can do it with a single recursive query using analytic functions:
WITH split_bins (id, name, quantity, maxquantity, tag, effective_quantity, prev_quantity) AS (
  SELECT id,
         name,
         quantity,
         maxquantity,
         FLOOR(
           COALESCE(
             SUM(quantity) OVER (
               PARTITION BY name
               ORDER BY id
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
             ),
             0
           )
           / maxquantity
         ) + 1,
         LEAST(
           maxquantity
           - MOD(
               COALESCE(
                 SUM(quantity) OVER (
                   PARTITION BY name
                   ORDER BY id
                   ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
                 ),
                 0
               ),
               maxquantity
             ),
           quantity
         ),
         0
  FROM articles
  UNION ALL
  SELECT id,
         name,
         quantity,
         maxquantity,
         tag + 1,
         LEAST(
           quantity - prev_quantity - effective_quantity,
           maxquantity
         ),
         prev_quantity + effective_quantity
  FROM split_bins
  WHERE prev_quantity + effective_quantity < quantity
)
SEARCH DEPTH FIRST BY id SET id_order
SELECT id,
       name,
       quantity,
       maxquantity,
       name || '_part' || tag AS tag,
       effective_quantity
FROM split_bins;
Which, for the sample data:
CREATE TABLE articles (id, name, quantity, maxquantity ) AS
SELECT 1, 'name_a', 3, 5 FROM DUAL UNION ALL
SELECT 2, 'name_a', 1, 5 FROM DUAL UNION ALL
SELECT 3, 'name_a', 3, 5 FROM DUAL UNION ALL
SELECT 4, 'name_a', 5, 5 FROM DUAL UNION ALL
SELECT 5, 'name_b', 7, 4 FROM DUAL UNION ALL
SELECT 6, 'name_b', 2, 4 FROM DUAL UNION ALL
SELECT 7, 'name_c', 6, 2 FROM DUAL UNION ALL
SELECT 8, 'name_c', 2, 2 FROM DUAL;
Outputs:
ID | NAME   | QUANTITY | MAXQUANTITY | TAG          | EFFECTIVE_QUANTITY
---+--------+----------+-------------+--------------+-------------------
1  | name_a | 3        | 5           | name_a_part1 | 3
2  | name_a | 1        | 5           | name_a_part1 | 1
3  | name_a | 3        | 5           | name_a_part1 | 1
3  | name_a | 3        | 5           | name_a_part2 | 2
4  | name_a | 5        | 5           | name_a_part2 | 3
4  | name_a | 5        | 5           | name_a_part3 | 2
5  | name_b | 7        | 4           | name_b_part1 | 4
5  | name_b | 7        | 4           | name_b_part2 | 3
6  | name_b | 2        | 4           | name_b_part2 | 1
6  | name_b | 2        | 4           | name_b_part3 | 1
7  | name_c | 6        | 2           | name_c_part1 | 2
7  | name_c | 6        | 2           | name_c_part2 | 2
7  | name_c | 6        | 2           | name_c_part3 | 2
8  | name_c | 2        | 2           | name_c_part4 | 2
db<>fiddle here

SQL Server dynamically sum rows based on other rows

Here's the situation: I need a way to total sales of a certain class of item every month. Easy enough, right?
Except sometimes, the item will be suppressed (with 0 price) and a special item will be put on the order with the price. I solved this by looking for suppressed lines and using LAG to pull the price from the special item on the line below it:
CASE
WHEN olu.supress_print = 'Y'
THEN LAG(shrv.sales_price_home, 1, 0) OVER (ORDER BY shrv.order_no, pvol.line_seq_no DESC)
ELSE shrv.sales_price_home
END AS total_sales
However, I recently discovered that sometimes they will split the suppressed item into multiple "special" lines. I'm trying to dynamically sum rows of certain trigger items until the row below the trigger item contains a non-special item. I'll illustrate with a table:
item_id | qty_ordered | tot_price | line_seq | suppress_print
--------+-------------+-----------+----------+---------------
A       | 10          | 150       | 1        | N
B       | 10          | 0         | 2        | Y
SPECIAL | 4           | 140       | 3        | N
SPECIAL | 6           | 90        | 4        | N
SPECIAL | 8           | 70        | 8        | N
SPECIAL | 6           | 80        | 9        | N
So in this example, I'd like the prices for lines 2, 3, and 4 summed and rolled into one line. I really only need the total price and ideally to be able to preserve item id "B".
I'm trying to think of a way to solve this using exclusively SQL. I know I could write a script to do it, but I'd like to limit this to just SQL if possible.
Edit - unfiltered table (imagine 2 is the item class I want the sum of sales for):
item_id | qty_ordered | tot_price | line_seq | suppress_print | class
--------+-------------+-----------+----------+----------------+------
A       | 10          | 150       | 1        | N              | 2
B       | 10          | 0         | 2        | Y              | 2
SPECIAL | 4           | 140       | 3        | N              | NULL
SPECIAL | 6           | 90        | 4        | N              | NULL
C       | 5           | 80        | 5        | N              | NULL
D       | 3           | 50        | 6        | N              | NULL
D       | 14          | 0         | 7        | N              | NULL
SPECIAL | 8           | 70        | 8        | N              | NULL
SPECIAL | 6           | 80        | 9        | N              | NULL
Edit 2 - expected results:
item_id | qty_ordered | tot_price | line_seq | suppress_print | class
--------+-------------+-----------+----------+----------------+------
A       | 10          | 150       | 1        | N              | 2
B       | 10          | 230       | 2        | Y              | 2
C       | 5           | 80        | 5        | N              | NULL
D       | 3           | 50        | 6        | N              | NULL
D       | 14          | 0         | 7        | N              | NULL
SPECIAL | 8           | 70        | 8        | N              | NULL
SPECIAL | 6           | 80        | 9        | N              | NULL
Here's something based on your unfiltered table.
I didn't attempt to limit the logic to a specific class.
But that could be added easily, at the end, or as needed.
I also didn't really need the suppress_print column in the logic.
We could also easily exclude the 'D' items from the SPECIAL logic. Based on the summed qty values and the 0 tot_price, I guessed we should treat them specially too. That's easily adjusted.
We handle this much like an edge-detection (gaps-and-islands) case, creating groups in the first groups CTE term.
Then, in the sums CTE term, use these groups to combine / SUM the SPECIAL rows within their groups / partitions. The rows associated with non-SPECIAL cases are in their own group, so can be summed as well.
The final query expression just takes the edge rows, which causes the SPECIAL rows to be hidden and the leading item_id shown only, as requested.
Here's the SQL Server test case:
Working Test Case (Updated)
and the corresponding solution:
WITH groups AS (
SELECT t.*
, SUM(CASE WHEN item_id <> 'SPECIAL' THEN 1 END) OVER (ORDER BY line_seq) AS seq
, CASE WHEN item_id <> 'SPECIAL' THEN 1 END AS edge
FROM unfiltered AS t
)
, sums AS (
SELECT item_id, qty_ordered
, line_seq, suppress_print, class
, SUM(tot_price) OVER (PARTITION BY seq) AS tot_price
, edge
FROM groups
)
SELECT item_id, qty_ordered, tot_price
, line_seq, suppress_print, class
FROM sums
WHERE edge = 1
;
Result:
+---------+-------------+-----------+----------+----------------+-------+
| item_id | qty_ordered | tot_price | line_seq | suppress_print | class |
+---------+-------------+-----------+----------+----------------+-------+
| A | 10 | 150 | 1 | N | 2 |
| B | 10 | 230 | 2 | Y | 2 |
| C | 5 | 80 | 5 | N | NULL |
| D | 3 | 50 | 6 | N | NULL |
| D | 14 | 150 | 7 | N | NULL |
+---------+-------------+-----------+----------+----------------+-------+
Both 'B' and the second 'D' item are summed as described in the question.
The data in the unfiltered table:
+---------+-------------+-----------+----------+----------------+-------+
| item_id | qty_ordered | tot_price | line_seq | suppress_print | class |
+---------+-------------+-----------+----------+----------------+-------+
| A | 10 | 150 | 1 | N | 2 |
| B | 10 | 0 | 2 | Y | 2 |
| SPECIAL | 4 | 140 | 3 | N | NULL |
| SPECIAL | 6 | 90 | 4 | N | NULL |
| C | 5 | 80 | 5 | N | NULL |
| D | 3 | 50 | 6 | N | NULL |
| D | 14 | 0 | 7 | N | NULL |
| SPECIAL | 8 | 70 | 8 | N | NULL |
| SPECIAL | 6 | 80 | 9 | N | NULL |
+---------+-------------+-----------+----------+----------------+-------+
and the following actually produces the exact requested result.
I haven't tried to reduce this. The requirement to restrict the behavior to a specific class added work. There were a couple of places I could have re-stated expressions to avoid additional CTE terms. Feel free to collapse them.
I also regenerated the groups (seq) a second time, once the main class logic was handled.
WITH groups AS (
SELECT t.*
, SUM(CASE WHEN item_id <> 'SPECIAL' THEN 1 END) OVER (ORDER BY line_seq) AS seq
, CASE WHEN item_id <> 'SPECIAL' THEN 1 END AS edge
FROM unfiltered AS t
)
, classes AS (
SELECT item_id, qty_ordered, tot_price
, line_seq, suppress_print
, edge, seq
, MAX(class) OVER (PARTITION BY seq) AS class
FROM groups
)
, edges AS (
SELECT item_id, qty_ordered, tot_price
, line_seq, suppress_print
, class
, CASE WHEN edge = 1 OR class IS NULL THEN 1 END AS edge
, SUM(CASE WHEN edge = 1 OR class IS NULL THEN 1 END) OVER (ORDER BY line_seq) AS seq
FROM classes
)
, sums AS (
SELECT item_id, qty_ordered
, line_seq, suppress_print, class
, SUM(tot_price) OVER (PARTITION BY seq) AS tot_price
, edge
FROM edges
)
SELECT item_id, qty_ordered, tot_price
, line_seq, suppress_print, class
FROM sums
WHERE edge = 1
;
Result:
+---------+-------------+-----------+----------+----------------+-------+
| item_id | qty_ordered | tot_price | line_seq | suppress_print | class |
+---------+-------------+-----------+----------+----------------+-------+
| A | 10 | 150 | 1 | N | 2 |
| B | 10 | 230 | 2 | Y | 2 |
| C | 5 | 80 | 5 | N | NULL |
| D | 3 | 50 | 6 | N | NULL |
| D | 14 | 0 | 7 | N | NULL |
| SPECIAL | 8 | 70 | 8 | N | NULL |
| SPECIAL | 6 | 80 | 9 | N | NULL |
+---------+-------------+-----------+----------+----------------+-------+
Using APPLY to get the parent info for 'SPECIAL' rows of an item with suppress_print = 'Y':
WITH grp AS (
SELECT -- all but tot_price from parent
coalesce(parent.item_id, itm.item_id) item_id,
coalesce(parent.qty_ordered, itm.qty_ordered) qty_ordered,
itm.tot_price,
coalesce(parent.line_seq, itm.line_seq) line_seq,
coalesce(parent.suppress_print, itm.suppress_print) suppress_print,
coalesce(parent.class, itm.class) class
FROM myTbl itm
OUTER APPLY (
SELECT t3.*
FROM (
SELECT top(1) t2.*
FROM myTbl t2
WHERE itm.item_id = 'SPECIAL' AND t2.line_seq < itm.line_seq AND t2.item_id != 'SPECIAL'
ORDER BY line_seq DESC
) t3
WHERE t3.suppress_print = 'Y'
) parent
)
select item_id, qty_ordered, sum(tot_price) tot_price, line_seq, suppress_print, class
from grp
group by item_id, qty_ordered, line_seq, suppress_print, class
order by line_seq

How to increment the counting for each non-consecutive value?

Below is a simple representation of my table:
ID | GA
----------
1 | 1.5
2 | 1.5
3 | 1.2
4 | 1.5
5 | 1.3
I would like to count the number of occurrence of the GA column's values BUT the count should not increment when the value is the same as the next row.
What I would like to expect is like this:
ID | GA | COUNT
-------------------
1 | 1.5 | 1
2 | 1.5 | 1
3 | 1.2 | 1
4 | 1.5 | 2
5 | 1.3 | 1
Notice that GA = 1.5 count is 2. This is because there is a row between ID 2 & 4 that breaks the succession of 1.5.
NOTE: The ordering by ID also matters.
Here's what I've done so far:
SELECT ID, GA, COUNT(*) OVER (
PARTITION BY GA
ORDER BY ID
) COUNT
FROM (
SELECT 1 AS ID,'1.5' AS GA
FROM DUAL
UNION
SELECT 2,'1.5' FROM DUAL
UNION
SELECT 3,'1.2' FROM DUAL
UNION
SELECT 4,'1.5' FROM DUAL
UNION
SELECT 5,'1.3' FROM DUAL
) FOO
ORDER BY ID;
But the result is far from expectation:
ID | GA | COUNT
-------------------
1 | 1.5 | 1
2 | 1.5 | 2
3 | 1.2 | 1
4 | 1.5 | 3
5 | 1.3 | 1
Notice that even if they are consecutive values, the count is still incrementing.
It seems that you are asking for a kind of running total, not just a global count.
Assuming that the input data is in a table named input_data, this should do the trick:
WITH
with_previous AS (
SELECT id, ga, LAG(ga) OVER (ORDER BY id) AS previous_ga
FROM input_data
),
just_new AS (
SELECT id,
ga,
CASE
WHEN previous_ga IS NULL
OR previous_ga <> ga
THEN ga
END AS new_ga
FROM with_previous
)
SELECT id,
ga,
COUNT(new_ga) OVER (PARTITION BY ga ORDER BY id) AS ga_count
FROM just_new
ORDER BY 1
See sqlfiddle: http://sqlfiddle.com/#!4/187e13/1
Result:
ID | GA | GA_COUNT
----+-----+----------
1 | 1.5 | 1
2 | 1.5 | 1
3 | 1.2 | 1
4 | 1.5 | 2
5 | 1.3 | 1
6 | 1.5 | 3
7 | 1.5 | 3
8 | 1.3 | 2
I took sample data from #D-Shih's sqlfiddle
As I understand the problem, this is a variation of a gaps-and-islands problem. You want to enumerate the groups for each ga value independently.
If this interpretation is correct, then I would go for dense_rank() and the difference of row numbers:
select t.*, dense_rank() over (partition by ga order by seqnum_1 - seqnum_2)
from (select t.*,
row_number() over (order by id) as seqnum_1,
row_number() over (partition by ga order by id) as seqnum_2
from t
) t
order by id;
Here is a rextester.
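To try the difference-of-row-numbers trick without creating a table first, here is a self-contained variant with the sample rows inlined (only the table t is swapped for an inline view; the window logic is unchanged):

```sql
select id, ga,
       dense_rank() over (partition by ga order by seqnum_1 - seqnum_2) as ga_count
from (
  select t.*,
         row_number() over (order by id) as seqnum_1,
         row_number() over (partition by ga order by id) as seqnum_2
  from (
    select 1 as id, 1.5 as ga from dual union all
    select 2, 1.5 from dual union all
    select 3, 1.2 from dual union all
    select 4, 1.5 from dual union all
    select 5, 1.3 from dual
  ) t
)
order by id;
-- ids 1 and 2 share group 1 of ga 1.5; id 4 starts group 2; ids 3 and 5 each get group 1
```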
Use a subquery with the LAG and SUM analytic functions:
SELECT id, ga,
sum( cnt ) over (partition by ga order by id) as cnt
FROM (
select t.*,
case lag(ga) over (order by id)
when ga then 0 else 1
end cnt
from Tab t
)
order by id
| ID | GA | CNT |
|----|-----|-----|
| 1 | 1.5 | 1 |
| 2 | 1.5 | 1 |
| 3 | 1.2 | 1 |
| 4 | 1.5 | 2 |
| 5 | 1.3 | 1 |
Demo: http://sqlfiddle.com/#!4/5ddd1/5

Oracle SQL, how to select * having distinct columns

I want to have a query something like this (this doesn't work!)
select * from foo where rownum < 10 having distinct bar
Meaning I want to select all columns from ten random rows with distinct values in column bar. How to do this in Oracle?
Here is an example. I have the following data
| item | rate |
-------------------
| a | 50 |
| a | 12 |
| a | 26 |
| b | 12 |
| b | 15 |
| b | 45 |
| b | 10 |
| c | 5 |
| c | 15 |
And result would be for example
| item no | rate |
------------------
| a | 12 | --from (26 , 12 , 50)
| b | 45 | --from (12 ,15 , 45 , 10)
| c | 5 | --from (5 , 15)
Always having a distinct item no.
SQL Fiddle
Oracle 11g R2 Schema Setup:
Generate a table with 12 items A - L each with rates 0 - 4:
CREATE TABLE items ( item, rate ) AS
SELECT CHR( 64 + CEIL( LEVEL / 5 ) ),
MOD( LEVEL - 1, 5 )
FROM DUAL
CONNECT BY LEVEL <= 60;
Query 1:
SELECT item,
rate
FROM (
SELECT i.*,
-- Give the rates for each item a unique index assigned in a random order
ROW_NUMBER() OVER ( PARTITION BY item ORDER BY DBMS_RANDOM.VALUE ) AS rn
FROM items i
ORDER BY DBMS_RANDOM.VALUE -- Order all the rows randomly
)
WHERE rn = 1 -- Only get the first row for each item
AND ROWNUM <= 10 -- Only get the first 10 items.
Results:
| ITEM | RATE |
|------|------|
| A | 0 |
| K | 2 |
| G | 4 |
| C | 1 |
| E | 0 |
| H | 0 |
| F | 2 |
| D | 3 |
| L | 4 |
| I | 1 |
Here are the table creation script and a query for distinct, top-10 rows
(ref: SqlFiddle):
create table foo(item varchar(20), rate int);
insert into foo values('a',50);
insert into foo values('a',12);
insert into foo values('a',26);
insert into foo values('b',12);
insert into foo values('b',15);
insert into foo values('b',45);
insert into foo values('b',10);
insert into foo values('c',5);
insert into foo values('c',15);
-- First number the rows per item, then keep only row number 1:
select item, rate
from (
  select item, rate,
         row_number() over (partition by item order by rate desc) as row_num
  from foo
)
where row_num = 1;

PL SQL Recursive Query

Please find the tables below.
EVENT table
event_id | gross_amount | transaction_id
1 | 10 | 1
2 | 12 | 5
TRANSACTION table
trx_id | debit | credit | type | original_trx_id | last_updated
1 | 0 | 0 | payment | null | 25-JUL-11
2 | 0 | 2 | settlement | 1 | 26-JUL-11
3 | 0 | 1 | settlement | 1 | 27-JUL-11
4 | 3 | 0 | settlement | 1 | 28-JUL-11
5 | 0 | 0 | payment | null | 24-JUL-11
6 | 0 | 3 | settlement | 5 | 25-JUL-11
RESULT EXPECTED:
trx_id | debit | credit | current_gross | current_net
2 | 0 | 2 | 10 | 12
3 | 0 | 1 | 12 | 13
4 | 3 | 0 | 12 | 9
6 | 0 | 3 | 10 | 13
Explanation
Transactions 1, 2, 3 and 4 fall into one set, and transactions 5 and 6 fall into another set. Each transaction set can be ordered using the last_updated column.
For the calculation we do not take transactions of type "payment". The "payment" transaction is linked to the event table, from which we can find the original_gross_amount for the calculation.
Steps
Find the event table's payment transaction in the transaction table. (Ex: transaction_id = 1; from that we can also find original_gross_amount = 10.)
Take all the "settlement" transactions that have original_trx_id = 1.
Order them based on last updated time.
Apply the calculation.
Hope you have understood my question. I want to get the "RESULT EXPECTED" somehow using PL/SQL (please, no custom functions).
I cannot think of a way to apply CONNECT BY here. Your help is highly appreciated.
Please find below create table and insert statements.
create table event
(event_id number(9),
gross_amount number(9),
transaction_id number(9) );
insert into event values (1,10,1);
insert into event values (2,10,5);
create table transaction
(trx_id number(9),
debit number(9),
credit number(9),
type varchar2(50),
original_trx_id number(9),
last_updated DATE
);
insert into transaction values (1, 0, 0, 'payment',    null, DATE '2011-07-25');
insert into transaction values (2, 0, 2, 'settlement', 1,    DATE '2011-07-26');
insert into transaction values (3, 0, 1, 'settlement', 1,    DATE '2011-07-27');
insert into transaction values (4, 3, 0, 'settlement', 1,    DATE '2011-07-28');
insert into transaction values (5, 0, 0, 'payment',    null, DATE '2011-07-24');
insert into transaction values (6, 0, 3, 'settlement', 5,    DATE '2011-07-25');
If I understand your question right, you don't want a hierarchical or recursive query, just an analytic sum with a windowing clause.
SELECT T1.trx_id
, T1.debit
, T1.credit
, E2.gross_amount
+ NVL( SUM( T1.credit ) OVER( PARTITION BY T1.original_trx_id
ORDER BY T1.last_updated
RANGE BETWEEN UNBOUNDED PRECEDING
AND 1 PRECEDING ), 0 )
- NVL( SUM( T1.debit ) OVER( PARTITION BY T1.original_trx_id
ORDER BY T1.last_updated
RANGE BETWEEN UNBOUNDED PRECEDING
AND 1 PRECEDING ), 0 )
AS current_gross
, E2.gross_amount
+ SUM( T1.credit ) OVER( PARTITION BY T1.original_trx_id
ORDER BY T1.last_updated
RANGE BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW )
- SUM( T1.debit ) OVER( PARTITION BY T1.original_trx_id
ORDER BY T1.last_updated
RANGE BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW )
AS current_net
FROM g1_transaction T1
, g1_event E2
WHERE T1.original_trx_id = E2.transaction_id
ORDER BY T1.original_trx_id, T1.last_updated
NOTE: there are a few problems in your question (or at least in my understanding of it):
Should the 2nd insert into event set the gross_amount to 12?
Should the current_gross of trx_id 4 in the results be 13 (instead of 12), because it includes the 1 credit from trx_id 3? And thus the net should be 10 (instead of 9)?
Should the current_gross of trx_id 6 be 12 (instead of 10), because this is the gross_amount of event 2? And thus the current_net would be 15 (instead of 13)?
If these assumptions are correct then the query I provided gives these results.
TRX_ID DEBIT CREDIT CURRENT_GROSS CURRENT_NET
---------- ---------- ---------- ------------- -----------
2 0 2 10 12
3 0 1 12 13
4 3 0 13 10
6 0 3 12 15