I have some data that looks like this:
id date total_amount adj_amount
1 2017-01-02 100 50
1 2017-01-02 50 0
2 2017-01-15 100 35
2 2017-01-15 35 0
3 2017-01-30 120 50
3 2017-01-30 -120 -50
3 2017-01-30 100 50
3 2017-01-30 50 0
3 2017-01-30 60 40
The output should look like this; I have no clue how to do the subtraction between rows and columns.
id date due_amount
1 2017-01-02 0
2 2017-01-15 0
3 2017-01-30 40
Here is my current code; it more or less works for ids 1 and 2, but definitely not for 3.
The logic for this part is to find the due amount across the entries for each id. For example, id 1 has two entries: the total amount is 100 and he paid 50, so the adj amount is 50; in the second entry the total amount is 50, he paid 50, and the adj amount is 0. So id 1's due amount is 0 in the end.
Id 3 has five entries. The first entry shows a total amount of 120; he paid 70, so the adj amount is 50. But the first entry was a mistake, so it is all reversed by the second entry. The third entry shows a total amount of 100; id 3 paid 50, so the adj amount is 50. The fourth entry shows a total amount of 50; id 3 paid 50, so the adj amount is 0. The fifth entry shows a total amount of 60; id 3 paid 20, so the adj amount is 40. So in the end, id 3's due amount is 40.
select distinct a.id,
a.date,
case when a.date=b.date and a.total_amount = b.adj_amount then a.adj_amount
when a.date=b.date and a.total_amount <> b.adj_amount then ABS(a.adj_amount + b.adj_amount)
else a.adj_amount
end as due_amount
from table a,
table b
where a.id=b.id;
I just wonder if there is any function that can do this kind of calculation between rows and columns.
Use GROUP BY and SUM().
SELECT the_date, SUM(due_amount)
FROM tab
GROUP BY the_date;
Something like this could work - if the transactions can be ordered. Note that I've renamed some of the columns to help clarify their meaning. I've also added a trans_seq_num column to indicate the order of a customer's transactions on a particular date. I think you're looking for the amount that the customer still owes as of their last payment.
WITH sample (id, trans_seq_num, some_date, starting_balance, ending_balance) AS
(
SELECT '1',1,'2017-01-02','100','50' FROM dual UNION ALL
SELECT '1',2,'2017-01-02','50','0' FROM dual UNION ALL
SELECT '2',1,'2017-01-15','100','35' FROM dual UNION ALL
SELECT '2',2,'2017-01-15','35','0' FROM dual UNION ALL
SELECT '3',1,'2017-01-30','120','50' FROM dual UNION ALL
SELECT '3',2,'2017-01-30','-120','-50' FROM dual UNION ALL
SELECT '3',3,'2017-01-30','100','50' FROM dual UNION ALL
SELECT '3',4,'2017-01-30','50','0' FROM dual UNION ALL
SELECT '3',5,'2017-01-30','60','40' FROM dual
)
SELECT DISTINCT id,
some_date,
LAST_VALUE(ending_balance) OVER (PARTITION BY id ORDER BY trans_seq_num RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) amount_due
FROM sample
ORDER BY 1,2,3;
ID SOME_DATE AMOUNT_DUE
----- --------------- ---------------
1 2017-01-02 0
2 2017-01-15 0
3 2017-01-30 40
The others have already said it: you should have some way of numbering the rows. A simple sequence will do the job. With such a unique column the solution is trivial; we only need to find the last row for each id.
But you have no order. Here is my attempt, which looks OK so far and may help temporarily:
with q as (
select table_a.*,
row_number() over (partition by id, date_, total_amount, adj_amount
order by null) rn
from table_a),
t as (
select a.*,
row_number() over (partition by id, date_, total_amount
order by null) r1,
row_number() over (partition by id, date_, adj_amount
order by null) r2
from q a
where not exists (
select 1 from q b
where a.id = b.id and a.date_ = b.date_ and a.rn = b.rn
and a.total_amount = -b.total_amount and a.adj_amount = -b.adj_amount))
select id, date_, max(adj_amount) due
from t
where connect_by_isleaf = 1
connect by prior id = id and prior date_ = date_
and prior adj_amount = total_amount and prior r2 = r1
group by id, date_;
First I eliminate the mistakes. Subquery t does this; it is a simple NOT EXISTS with ROW_NUMBER added to properly handle repeated cases (like (120, 50) => (-120, -50) and then (120, 50) again).
With the data cleaned, we can recursively find connected rows via prior adj_amount = total_amount. We have to use row numbers again to handle identical rows, e.g. (60, 40) => (40, 0) => (60, 40) again.
Then only the leaves are taken, and finally the max value of those leaves, which should contain the orphaned non-zero value for each id if one exists. You can add SYS_CONNECT_BY_PATH() to the select list to see whether the chaining works properly.
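For example, a debugging sketch along those lines. It reuses the q and t subqueries from the query above (so it is not standalone) and lists each chain instead of aggregating:
-- Sketch: run only the hierarchical part, reusing the WITH q/t subqueries above,
-- to see how rows are chained via prior adj_amount = total_amount.
select id, date_, total_amount, adj_amount,
       sys_connect_by_path(total_amount || '/' || adj_amount, ' -> ') chain,
       connect_by_isleaf is_leaf
from t
connect by prior id = id and prior date_ = date_
       and prior adj_amount = total_amount and prior r2 = r1;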
Hierarchical queries are slower than other approaches, so be warned if your table is big; filter the data first if needed.
This query works for your examples and some others that I imagined and tested. But even if it works, you should add an ordering column (if possible) so that you have a guaranteed, simple way to obtain correct results.
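For comparison, a sketch of how simple it becomes once such a column exists. Here seq is a hypothetical name for that ordering column and table_a is the same table name used above; the due amount is then just the adj_amount of each id's last entry:
-- Sketch only: assumes a hypothetical ordering column "seq" reflecting entry order
select id, date_, adj_amount as due_amount
from (select a.*,
             row_number() over (partition by id order by seq desc) rn
      from table_a a)
where rn = 1;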
Table "client_orders":
date
ordered
id
28.05
50
1
23.06
60
2
24.05
50
1
25.06
130
2
Table "stock":
id
amount
date
1
60
23.04
2
90
25.04
1
10
24.04
2
10
24.06
I want to calculate the amount I need to order (to replenish the stock), and by what date. For instance, it should be:
30 by 28.05 (60+10-50-50=-30) for id = 1
-90 by 25.06 (90-60+10-130=-90) for id = 2
I tried to do it with the LAG function, but the problem is that the stock here is not updating.
SELECT *,
SUM(amount - ordered) OVER (PARTITION BY sd.id ORDER BY d.date ASC)
FROM stock sd
LEFT JOIN (SELECT date,
id,
ordered
FROM client_orders) AS d
ON sd.id = d.id
I couldn't find anything similar on the web. I'd be grateful if you could share articles or examples of how to do this.
You could make a union of the two tables and sum all stock amounts with the negative of ordered amounts. For the date you could instead take the corresponding maximum value.
SELECT id,
SUM(amount),
MAX(date)
FROM (SELECT id,
-ordered AS amount,
date
FROM client_orders
UNION ALL
SELECT id,
       amount,
       date
FROM stock
) stock_and_orders
GROUP BY id
I have a table and I am trying to calculate, for each book_id, the total sales in the past 100 days for every day in the past year.
book_id location seller daily_sales order_day
ABC 1 XYZ 100 2017-05-05
ABC 1 XYZ 120 2017-05-07
ABC 1 XYZ 40 2017-02-10
...
So what I expect in the result is:
book_id order_day sum
ABC 2017-05-05 100+40
ABC 2017-05-07 100+120+40
ABC 2017-02-10 40
For this I wrote a query like this:
select book_id, to_char(order_day),
SUM(case when order_day between order_day -100 and order_day then daily_sales else 0 end) sum
FROM bookDetailsTable
where location = 1 AND ORDER_DAY BETWEEN TO_DATE('20170725','YYYYMMDD') - 359 AND TO_DATE('20170725','YYYYMMDD')
group by seller, book_id, order_day
I guess I am doing it wrong and that I should write a select statement within the SUM to pick up the data for the past 100 days.
You should get the result with this
select A.book_id,
A.order_day,
( select sum(b.daily_sales)
from bookDetailsTable b
where A.book_id = B.book_id
and B.order_day between A.order_day -100 and A.order_day
) total_sales_last_100_days
from bookDetailsTable A
where A.order_day between ADD_MONTHS(trunc(sysdate),-12) and trunc(sysdate)
If you understand the principle of the query, you should be able to add your other restrictions, like seller or location.
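For instance, a sketch (using the same assumed table and column names) that restricts both the outer query and the correlated subquery to a single location:
SELECT A.book_id,
       A.order_day,
       ( SELECT SUM(B.daily_sales)
           FROM bookDetailsTable B
          WHERE A.book_id = B.book_id
            AND B.location = A.location   -- keep the 100-day window within the same location
            AND B.order_day BETWEEN A.order_day - 100 AND A.order_day
       ) total_sales_last_100_days
  FROM bookDetailsTable A
 WHERE A.location = 1
   AND A.order_day BETWEEN ADD_MONTHS(TRUNC(SYSDATE), -12) AND TRUNC(SYSDATE);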
This is a perfect case for using analytic functions, specifically the SUM() analytic function, along with the windowing clause:
WITH bookdetailstable AS (SELECT 'ABC' book_id, 1 LOCATION, 'XYZ' seller, 100 daily_sales, to_date('05/05/2016', 'dd/mm/yyyy') order_day FROM dual UNION ALL
SELECT 'ABC' book_id, 1 LOCATION, 'XYZ' seller, 120 daily_sales, to_date('07/05/2016', 'dd/mm/yyyy') order_day FROM dual UNION ALL
SELECT 'ABC' book_id, 1 LOCATION, 'XYZ' seller, 40 daily_sales, to_date('10/02/2016', 'dd/mm/yyyy') order_day FROM dual UNION ALL
SELECT 'ABC' book_id, 1 LOCATION, 'XYZ' seller, 600 daily_sales, to_date('10/02/2017', 'dd/mm/yyyy') order_day FROM dual)
SELECT book_id,
to_char(order_day, 'yyyy-mm-dd') order_day,
total_sales_last_100_days
FROM (SELECT book_id,
order_day,
SUM(daily_sales) OVER (PARTITION BY book_id ORDER BY order_day
RANGE BETWEEN 100 PRECEDING AND CURRENT ROW) total_sales_last_100_days
FROM bookdetailstable
where order_day >= add_months(trunc(sysdate) - 100, -12))
where order_day >= add_months(trunc(SYSDATE), -12);
BOOK_ID ORDER_DAY TOTAL_SALES_LAST_100_DAYS
------- ---------- -------------------------
ABC 2016-02-10 40
ABC 2016-05-05 140
ABC 2016-05-07 260
ABC 2017-02-10 600
This simply says: get the sum of daily_sales for each book_id (you can think of the partition by clause as being similar to the group by clause - it simply defines the group of rows the function applies over), ordered by order_day, looking back over the 100 days preceding the current row's order_day plus the current row itself (with RANGE the window is measured in days, not in physical rows).
If you needed to work out the cumulative sum for specific book_ids based on location (and seller, and so on), then you would need to include the extra grouping columns in the partition by clause, as sketched below.
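A sketch of that variant (same assumed table and columns), partitioning by book_id and location; seller could be added to the PARTITION BY list the same way:
SELECT book_id,
       location,
       order_day,
       SUM(daily_sales) OVER (PARTITION BY book_id, location  -- extra grouping columns go here
                              ORDER BY order_day
                              RANGE BETWEEN 100 PRECEDING AND CURRENT ROW) total_sales_last_100_days
  FROM bookdetailstable;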
Since you want to restrict the results to the past year (and assuming you want the first row to cover its preceding 100 days as well, rather than starting from zero), the inner query needs to include the 100 days prior to a year ago. The outer query then restricts the rows to the year's worth of data you're interested in.
That's because analytic functions work across the data after it's been filtered by the where clause, so if you want to include data from outside the current where clause, you're going to have to look for a way to include those rows and then do the additional filtering later.
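As a side note, the window uses RANGE rather than ROWS on purpose. A sketch of the difference, against the same assumed table:
SELECT book_id,
       order_day,
       -- RANGE: everything within 100 days before the current order_day
       SUM(daily_sales) OVER (PARTITION BY book_id ORDER BY order_day
                              RANGE BETWEEN 100 PRECEDING AND CURRENT ROW) sales_last_100_days,
       -- ROWS: the 100 physical rows before the current one, however many days they span
       SUM(daily_sales) OVER (PARTITION BY book_id ORDER BY order_day
                              ROWS BETWEEN 100 PRECEDING AND CURRENT ROW) sales_last_100_rows
  FROM bookdetailstable;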
I am looking for a query where a certain amount gets distributed across the invoices below, based on account_num and item_order. Also, if partial_payment_allowed is set to 'N', the amount should only be applied to that invoice if the remaining amount is at least the invoice_amt; otherwise it should skip that row and carry on to the next invoice of the account.
Item_order inv_amount Partial_pmt account_num cr_amt
1 1256 Y 12 1000
2 1134 Y 12 1000
1 800 Y 13 1200
2 200 N 13 1200
3 156 N 13 1200
In the above data, each account has a cr_amt which can be distributed according to item_order. So after distribution the result would be:
account_num Item_order inv_amount Partial_pmt Dist_amt Bal_amt
12 1 1256 Y 1000 256
12 2 1134 Y 256 878
13 1 800 Y 800 400
13 2 200 N 200 200
13 3 156 N 100 100
We are trying to avoid loops; any comments are highly appreciated. Thank you.
Extending the answer to this question:
payment distrubution oracle sql query
You can still use the SQL MODEL clause. In this version, you need separate calculations for each distinct account_num. You can achieve this using the PARTITION keyword of the SQL MODEL clause to partition by account_num.
Like this (see SQL comments for step-by-step explanation):
-- Set up test data (since I don't have your table)
WITH inv_raw (item_order, inv_amount, partial_pmt_allowed, account_num, cr_amt) AS (
SELECT 1, 1256, 'Y', 12, 1000 FROM DUAL UNION ALL
SELECT 2, 1134, 'Y', 12, 1000 FROM DUAL UNION ALL
SELECT 3, 800, 'Y', 13, 1200 FROM DUAL UNION ALL
SELECT 4, 200, 'N',13, 1200 FROM DUAL UNION ALL
SELECT 5, 156, 'N',13, 1200 FROM DUAL),
-- Ensure that the column we are ordering by is densely populated
inv_dense (dense_item_order, item_order, inv_amount, partial_pmt_allowed, account_num, cr_amt) AS
( SELECT DENSE_RANK() OVER ( PARTITION BY account_num ORDER BY item_order ), item_order, inv_amount, partial_pmt_allowed, account_num, cr_amt FROM inv_raw )
-- Give us a way to input the payment amount
--param AS ( SELECT 1100 p_payment_amount FROM DUAL )
-- The actual query starts here
SELECT
account_num,
item_order,
inv_amount,
partial_pmt_allowed,
applied dist_amount,
remaining_out balance_amt,
cr_amt
FROM inv_dense
MODEL
-- We want a completely separate calculation for each distinct account_num
PARTITION BY ( account_num )
-- We'll output one row for each value of dense_item_order.
-- We made item_order "dense" so we can do things like CV()-1 to get the
-- previous row's values.
DIMENSION BY ( dense_item_order )
MEASURES ( cr_amt, item_order, inv_amount,
partial_pmt_allowed, 0 applied,
0 remaining_in, 0 remaining_out )
RULES AUTOMATIC ORDER (
-- The amount carried into the first row is the payment amount
remaining_in[1] = cr_amt[1],
-- The amount carried into subsequent rows is the amount we carried out of the prior row
remaining_in[dense_item_order > 1] = remaining_out[CV()-1],
-- The amount applied depends on whether the amount remaining can cover the invoice
-- and whether partial payments are allowed
applied[ANY] = CASE WHEN remaining_in[CV()] >= inv_amount[CV()] OR partial_pmt_allowed[CV()] = 'Y' THEN LEAST(inv_amount[CV()], remaining_in[CV()]) ELSE 0 END,
-- The amount we carry out is the amount we brought in minus what we applied
remaining_out[ANY] = remaining_in[CV()] - applied[CV()]
)
ORDER BY account_num, item_order;
I have two sets of pricing data (A and B). Set A consists of all of my pricing data per order over a month. Set B consists of all of my competitor's pricing data over the same month. I want to compare my competitor's lowest price to each of my prices per day.
Graphically, the data appears like this:
Date  Set A  Set B
1     25     31
1     54     47
1     23     56
1     12     23
1     76     40
1     42
I want to pass only the lowest price to a case statement which evaluates which prices are better. I would like to process an entire month's worth of data all at once, so in my example, dates 1 through 30 (or 31) would be included and crunched together, and for each day there would be only one value from Set B included: the lowest price in the set.
Important note: Set B does not have a data point for each point in Set A.
Hopefully this makes sense. Thanks in advance for any help you may be able to render.
That's a strange example you have - do you really have prices ranging from 12 to 76 within a single day?
Anyway, left joining your (grouped) data with their (grouped) data should work (untested):
with
my_min_prices as (
select price_date, min(price_value) min_price from my_prices group by price_date),
their_min_prices as (
select price_date, min(price_value) min_price from their_prices group by price_date)
select
mine.price_date,
(case
when theirs.min_price is null then mine.min_price
when theirs.min_price >= mine.min_price then mine.min_price
else theirs.min_price
end) min_price
from
my_min_prices mine
left join their_min_prices theirs on mine.price_date = theirs.price_date
I'm still not sure that I understand your requirements. My best guess is that you want something like
with your_data as (
  select 1 date_id, 25 price_a, 31 price_b from dual
  union all
  select 1, 54, 47 from dual union all
  select 1, 23, 56 from dual union all
  select 1, 12, 23 from dual union all
  select 1, 76, 40 from dual union all
  select 1, 42, null from dual)
select date_id,
       sum( case when price_a < min_price_b
                 then 1
                 else 0
            end) better,
       sum( case when price_a = min_price_b
                 then 1
                 else 0
            end) tie,
       sum( case when price_a > min_price_b
                 then 1
                 else 0
            end) worse
  from ( select date_id,
                price_a,
                min(price_b) over (partition by date_id) min_price_b
           from your_data )
 group by date_id;
DATE_ID BETTER TIE WORSE
---------- ---------- ---------- ----------
1 1 1 4
Gday, I have a table that shows a series of scores and datetimes those scores occurred.
I'd like to select the maximum of these scores for each day, but display the datetime that the score occurred.
I am using an Oracle database (10g) and the table is structured like so:
scoredatetime score (integer)
---------------------------------------
01-jan-09 00:10:00 10
01-jan-09 01:00:00 11
01-jan-09 04:00:01 9
...
I'd like to be able to present the results such the above becomes:
01-jan-09 01:00:00 11
The following query gets me halfway there, but not all the way.
select
trunc(t.scoredatetime), max(t.score)
from
mytable t
group by
trunc(t.scoredatetime)
I cannot join on score only because the same high score may have been achieved multiple times throughout the day.
I appreciate your help!
Simon Edwards
with mytableRanked as (
  select
    scoredatetime,
    score,
    row_number() over (
      partition by trunc(scoredatetime)
      order by score desc, scoredatetime desc
    ) rk
  from mytable
)
select
  scoredatetime,
  score
from mytableRanked
where rk = 1
order by scoredatetime desc
In the case of multiple high scores within a day, this returns the row corresponding to the one that occurred latest in the day. If you want to see all of the highest scores in a day, use rank() instead of row_number() and drop the scoredatetime desc tiebreaker; row_number always numbers ties distinctly, so rk = 1 would still pick just one of them.
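For example, a sketch of that variant (same assumed mytable):
-- rank() gives every tied high score rk = 1, so all ties are returned
with mytableRanked as (
  select scoredatetime,
         score,
         rank() over (partition by trunc(scoredatetime) order by score desc) rk
  from mytable
)
select scoredatetime, score
from mytableRanked
where rk = 1
order by scoredatetime desc;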
Alternatively, you can do this (it will list ties of high score for a date):
select
scoredatetime,
score
from mytable
where not exists (
select *
from mytable M2
where trunc(M2.scoredatetime) = trunc(mytable.scoredatetime)
and M2.score > mytable.score
)
order by scoredatetime desc
First of all, you have not yet specified what should happen if two or more rows within the same day contain an equal high score.
Two possible answers to that question:
1) Just select one of the scoredatetimes; it doesn't matter which one.
In this case, don't use self joins or analytics as in the other answers, because there is a special aggregate function that can do the job more efficiently. An example:
create table mytable (scoredatetime,score)
as
select to_date('01-jan-2009 00:10:00','dd-mon-yyyy hh24:mi:ss'), 10 from dual union all
select to_date('01-jan-2009 01:00:00','dd-mon-yyyy hh24:mi:ss'), 11 from dual union all
select to_date('01-jan-2009 04:00:00','dd-mon-yyyy hh24:mi:ss'), 9 from dual union all
select to_date('02-jan-2009 00:10:00','dd-mon-yyyy hh24:mi:ss'), 1 from dual union all
select to_date('02-jan-2009 01:00:00','dd-mon-yyyy hh24:mi:ss'), 1 from dual union all
select to_date('02-jan-2009 04:00:00','dd-mon-yyyy hh24:mi:ss'), 0 from dual;

Table created.

select max(scoredatetime) keep (dense_rank last order by score) scoredatetime
     , max(score)
  from mytable
 group by trunc(scoredatetime,'dd');

SCOREDATETIME       MAX(SCORE)
------------------- ----------
01-01-2009 01:00:00         11
02-01-2009 01:00:00          1

2 rows selected.
2) Select all records with the maximum score.
In this case you need analytics with a RANK or DENSE_RANK function. An example:
select scoredatetime
     , score
  from ( select scoredatetime
              , score
              , rank() over (partition by trunc(scoredatetime,'dd') order by score desc) rnk
           from mytable
       )
 where rnk = 1;

SCOREDATETIME            SCORE
------------------- ----------
01-01-2009 01:00:00         11
02-01-2009 00:10:00          1
02-01-2009 01:00:00          1

3 rows selected.
Regards,
Rob.
You might need two SELECT statements to pull this off: the first to collect the truncated date and associated max score, and the second to pull in the actual datetime values associated with the score.
Try:
SELECT T.ScoreDateTime, T.Score
FROM
(
SELECT
TRUNC(T.ScoreDateTime) ScoreDate, MAX(T.score) BestScore
FROM
MyTable T
GROUP BY
TRUNC(T.ScoreDateTime)
) ByDate
INNER JOIN MyTable T
ON TRUNC(T.ScoreDateTime) = ByDate.ScoreDate and T.Score = ByDate.BestScore
ORDER BY T.ScoreDateTime DESC
This will pull in best score ties as well.
For a version which selects only the most recently-posted high score for each day:
SELECT T.ScoreDateTime, T.Score
FROM
(
SELECT
TRUNC(T.ScoreDateTime) ScoreDate,
MAX(T.score) BestScore,
MAX(T.ScoreDateTime) KEEP (DENSE_RANK LAST ORDER BY T.score) BestScoreTime
FROM
MyTable T
GROUP BY
TRUNC(T.ScoreDateTime)
) ByDate
INNER JOIN MyTable T
ON T.ScoreDateTime = ByDate.BestScoreTime and T.Score = ByDate.BestScore
ORDER BY T.ScoreDateTime DESC
This may still produce multiple records per date if the day's high score was recorded more than once at exactly the same time.