Select hours as columns from Oracle table - sql

I am working with an Oracle database table that is structured like this:
TRANS_DATE  TRANS_HOUR_ENDING  TRANS_HOUR_SUFFIX  READING
1/1/2021    1                  1                  100
1/1/2021    2                  1                  105
...         ...                ...                ...
1/1/2021    24                 1                  115
The TRANS_HOUR_SUFFIX is only used to track hourly readings on days when daylight saving time ends (when there can be two hours with the same TRANS_HOUR_ENDING value). This column is the bane of this database's design, but I'm trying to select the data in a certain way regardless. We need a report that columnizes this data based on the hour, so it would be structured like this (the last row shows a day on which DST ends):
TRANS_DATE  HOUR_1  HOUR_2_1  HOUR_2_2  ...  HOUR_24
1/1/2021    100     105       0         ...  115
1/2/2021    112     108       0         ...  135
...         ...     ...       ...       ...  ...
11/7/2021   117     108       107       ...  121
I have done something like this before with a PIVOT, but in this case I'm having trouble determining what to do to account for the suffix. When DST ends, we have to account for the extra hour. I know we could select each hourly value individually with DECODE or CASE statements, but that makes for messy code. Is there a cleaner way to do this?

You can include multiple source columns in the pivot for() and in() clauses, so you could do:
select *
from (
  select trans_date,
         trans_hour_ending,
         trans_hour_suffix,
         reading
  from your_table
)
pivot (max(reading) for (trans_hour_ending, trans_hour_suffix)
       in ((1, 1) as hour_1, (2, 1) as hour_2_1, (2, 2) as hour_2_2, (3, 1) as hour_3,
           -- snip
           (23, 1) as hour_23, (24, 1) as hour_24))
order by trans_date;
where every hour has an (n, 1) tuple, and the DST-relevant hour has an extra (2, 2) tuple.
If you don't have rows for every hour - which you don't appear to have, from the very brief sample data, at least for suffix 2 on non-DST days - then you will get null results for those, but you can replace them with zeros:
select trans_date,
       coalesce(hour_1, 0) as hour_1,
       coalesce(hour_2_1, 0) as hour_2_1,
       coalesce(hour_2_2, 0) as hour_2_2,
       coalesce(hour_3, 0) as hour_3,
       -- snip
       coalesce(hour_23, 0) as hour_23,
       coalesce(hour_24, 0) as hour_24
from (
  select trans_date,
         trans_hour_ending,
         trans_hour_suffix,
         reading
  from your_table
)
pivot (max(reading) for (trans_hour_ending, trans_hour_suffix)
       in ((1, 1) as hour_1, (2, 1) as hour_2_1, (2, 2) as hour_2_2, (3, 1) as hour_3,
           -- snip
           (23, 1) as hour_23, (24, 1) as hour_24))
order by trans_date;
which with slightly expanded sample data gets:
TRANS_DATE     HOUR_1   HOUR_2_1   HOUR_2_2   HOUR_3   HOUR_23   HOUR_24
---------- ---------- ---------- ---------- -------- --------- ---------
2021-01-01        100        105          0        0         0       115
2021-01-02        112        108          0        0         0       135
2021-11-07        117        108        107        0         0       121
Which is a bit long-winded when you have to include all 25 columns everywhere; but to avoid that you'd have to do a dynamic pivot.
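For reference, here is a minimal sketch of what that dynamic pivot could look like, assuming Oracle 11.2+ (for LISTAGG); it names every generated column hour_<hour>_<suffix> to avoid collisions, which differs slightly from the hour_2_1/hour_2_2 naming above. your_table and its columns are from the question; everything else is illustrative:
declare
  l_in_list varchar2(4000);
  l_cursor  sys_refcursor;
begin
  -- build the in() list from the hour/suffix pairs actually present
  select listagg('(' || trans_hour_ending || ',' || trans_hour_suffix
                 || ') as hour_' || trans_hour_ending || '_' || trans_hour_suffix, ', ')
           within group (order by trans_hour_ending, trans_hour_suffix)
    into l_in_list
    from (select distinct trans_hour_ending, trans_hour_suffix from your_table);

  -- open the pivot query with the generated column list; the caller
  -- fetches from l_cursor (e.g. via a function returning sys_refcursor)
  open l_cursor for
    'select * from your_table'
    || ' pivot (max(reading) for (trans_hour_ending, trans_hour_suffix)'
    || ' in (' || l_in_list || '))'
    || ' order by trans_date';
end;
/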

Like I said in my comment, if you can format it with an additional row, I would recommend just having a row for the extra hour. Every other day would look normal. The query to do it would look like this:
CREATE TABLE READINGS
(
  TRANS_DATE   DATE,
  TRANS_HOUR   INTEGER,
  TRANS_SUFFIX INTEGER,
  READING      INTEGER
);

INSERT INTO readings
SELECT TO_DATE('01/01/2021', 'MM/DD/YYYY'), 1, 1, 100 FROM DUAL UNION ALL
SELECT TO_DATE('01/01/2021', 'MM/DD/YYYY'), 2, 1, 100 FROM DUAL UNION ALL
SELECT TO_DATE('11/07/2021', 'MM/DD/YYYY'), 1, 1, 200 FROM DUAL UNION ALL
SELECT TO_DATE('11/07/2021', 'MM/DD/YYYY'), 1, 2, 300 FROM DUAL UNION ALL
SELECT TO_DATE('11/07/2021', 'MM/DD/YYYY'), 2, 1, 500 FROM DUAL UNION ALL
SELECT TO_DATE('11/07/2021', 'MM/DD/YYYY'), 2, 2, 350 FROM DUAL;

SELECT TRANS_DATE || DECODE(MAX(TRANS_SUFFIX) OVER (PARTITION BY TRANS_DATE),
                            1, NULL,
                            2, ' - ' || TRANS_SUFFIX) AS TRANS_DATE,
       HOUR_1, HOUR_2, /*...*/ HOUR_24
FROM readings
PIVOT (MAX(READING) FOR TRANS_HOUR IN (1 AS HOUR_1, 2 AS HOUR_2, /*...*/ 24 AS HOUR_24));
This would produce the following results (sorry, I can't get dbfiddle to work):
TRANS_DATE     HOUR_1  HOUR_2  HOUR_24
-------------  ------  ------  -------
01-JAN-21         100     100        -
07-NOV-21 - 1     200     500        -
07-NOV-21 - 2     300     350        -

Related

Oracle SQL - return the date record when there is no count result

I have the tables below, and I need my query to return the number of operations grouped by date.
For dates on which there were no operations, I need to return the date anyway with a zero count.
Kind of like this:
OPERATION_DATE | COUNT_OPERATION | COUNT_OPERATION2 |
04/06/2019 | 453 | 81 |
05/06/2019 | 0 | 0 |
-- QUERY I TRIED
SELECT
T1.DATE_OPERATION AS DATE_OPERATION,
NVL(T1.COUNT_OPERATION, '0') COUNT_OPERATION,
NVL(T1.COUNT_OPERATION2, '0') COUNT_OPERATIONX,
FROM
(
SELECT
trunc(t.DATE_OPERATION) as DATE_OPERATION,
count(t.ID_OPERATION) AS COUNT_OPERATION,
COUNT(CASE WHEN O.OPERATION_TYPE = 'X' THEN 1 END) COUNT_OPERATIONX,
from OPERATION o
left join OPERATION_TYPE ot on ot.id_operation = o.id_operation
where ot.OPERATION_TYPE in ('X', 'W', 'Z', 'I', 'J', 'V')
and TRUNC(t.DATE_OPERATION) >= to_date('01/06/2019', 'DD-MM-YYYY')
group by trunc(t.DATE_OPERATION)
) T1
-- TABLES
CREATE TABLE OPERATION
( ID_OPERATION NUMBER NOT NULL,
DATE_OPERATION DATE NOT NULL,
VALUE NUMBER NOT NULL )
CREATE TABLE OPERATION_TYPE
( ID_OPERATION NUMBER NOT NULL,
OPERATION_TYPE VARCHAR2(1) NOT NULL,
VALUE NUMBER NOT NULL)
I guess that it is a calendar you need, i.e. a table which contains all dates involved. Otherwise, how can you display something that doesn't exist?
This is what you currently have (I'm using only the operation table; add another one yourself):
SQL> with
2 operation (id_operation, date_operation, value) as
3 (select 1, date '2019-06-01', 100 from dual union all
4 select 2, date '2019-06-01', 200 from dual union all
5 -- 02/06/2019 is missing
6 select 3, date '2019-06-03', 300 from dual union all
7 select 4, date '2019-06-04', 400 from dual
8 )
9 select o.date_operation,
10 count(o.id_operation)
11 from operation o
12 group by o.date_operation
13 order by o.date_operation;
DATE_OPERA COUNT(O.ID_OPERATION)
---------- ---------------------
01/06/2019 2
03/06/2019 1
04/06/2019 1
SQL>
As there are no rows that belong to 02/06/2019, the query can't return anything for that date (you already know that).
Therefore, add a calendar. If you already have that table, fine - use it. If not, create one. It is a hierarchical query which adds level to a certain date. I'm using 01/06/2019 as the starting point, creating 5 days (note the connect by clause).
SQL> with
2 operation (id_operation, date_operation, value) as
3 (select 1, date '2019-06-01', 100 from dual union all
4 select 2, date '2019-06-01', 200 from dual union all
5 -- 02/06/2019 is missing
6 select 3, date '2019-06-03', 300 from dual union all
7 select 4, date '2019-06-04', 400 from dual
8 ),
9 dates (datum) as --> this is a calendar
10 (select date '2019-06-01' + level - 1
11 from dual
12 connect by level <= 5
13 )
14 select d.datum,
15 count(o.id_operation)
16 from operation o full outer join dates d on d.datum = o.date_operation
17 group by d.datum
18 order by d.datum;
DATUM COUNT(O.ID_OPERATION)
---------- ---------------------
01/06/2019 2
02/06/2019 0 --> missing in source table
03/06/2019 1
04/06/2019 1
05/06/2019 0 --> missing in source table
SQL>
Probably a better option is to dynamically create a calendar so that it doesn't depend on any hardcoded values, but uses the min(date_operation) to max(date_operation) time span. Here we go:
SQL> with
2 operation (id_operation, date_operation, value) as
3 (select 1, date '2019-06-01', 100 from dual union all
4 select 2, date '2019-06-01', 200 from dual union all
5 -- 02/06/2019 is missing
6 select 3, date '2019-06-03', 300 from dual union all
7 select 4, date '2019-06-04', 400 from dual
8 ),
9 dates (datum) as --> this is a calendar
10 (select x.min_datum + level - 1
11 from (select min(o.date_operation) min_datum,
12 max(o.date_operation) max_datum
13 from operation o
14 ) x
15 connect by level <= x.max_datum - x.min_datum + 1
16 )
17 select d.datum,
18 count(o.id_operation)
19 from operation o full outer join dates d on d.datum = o.date_operation
20 group by d.datum
21 order by d.datum;
DATUM COUNT(O.ID_OPERATION)
---------- ---------------------
01/06/2019 2
02/06/2019 0 --> missing in source table
03/06/2019 1
04/06/2019 1
SQL>
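For completeness, a sketch of how the type-specific counts from the question might plug into the same calendar idea (table and column names are taken from the question; treating id_operation as a one-to-one link between the two tables is an assumption):
with dates (datum) as (
  -- calendar, as above
  select date '2019-06-01' + level - 1
  from dual
  connect by level <= 5
)
select d.datum as operation_date,
       count(o.id_operation) as count_operation,
       count(case when ot.operation_type = 'X' then 1 end) as count_operationx
from dates d
left join operation o on trunc(o.date_operation) = d.datum
left join operation_type ot on ot.id_operation = o.id_operation
group by d.datum
order by d.datum;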

Oracle : Get average count for last 30 business days

Oracle version 11g.
My table has records similar to these.
calendar_date  ID  record_count
25-OCT-2017    1   20
25-OCT-2017    2   40
25-OCT-2017    3   60
24-OCT-2017    1   70
24-OCT-2017    2   50
24-OCT-2017    3   10
20-OCT-2017    1   35
20-OCT-2017    2   60
20-OCT-2017    3   90
18-OCT-2017    1   80
18-OCT-2017    2   50
18-OCT-2017    3   45
i.e. for each ID there is one record count for a given calendar day. The days are NOT continuous, i.e. there may be missing records for weekends, holidays, etc. On such days there will be no records for any ID; on working days there are entries for every ID.
I need to get the average record count over the last 30 business days for each ID.
I want an output like this (don't go by the values; it is just a sample):
ID avg_count_last_30
1 150
2 130
3 110
I am trying to figure out the most efficient way to do this. I thought of using RANGE BETWEEN, ROWS BETWEEN, etc., but I'm unsure they would work.
Of course a query like this won't help, as there are holidays in between:
select id, AVG(record_count) FROM mytable
where calendar_date between SYSDATE - 30 and SYSDATE - 1
group by id;
what I need is something like
select id , AVG(record_count) FROM mytable
where calendar_date between last_30th_business_day and last_business_day
group by id;
last_30th_business_day would be found by counting DISTINCT business days backwards from the most recent business day until I count 30.
last_business_day would be the most recent business day.
I would like to know the experts' opinion on the best approach.
Based on your comment, try this one:
WITH mytable (calendar_date, ID, record_count) AS (
SELECT TO_DATE('25-10-2017', 'DD-MM-YYYY'), 1, 20 FROM dual UNION ALL
SELECT TO_DATE('25-10-2017', 'DD-MM-YYYY'), 2, 40 FROM dual UNION ALL
SELECT TO_DATE('25-10-2017', 'DD-MM-YYYY'), 3, 60 FROM dual UNION ALL
SELECT TO_DATE('24-10-2017', 'DD-MM-YYYY'), 1, 70 FROM dual UNION ALL
SELECT TO_DATE('24-10-2017', 'DD-MM-YYYY'), 2, 50 FROM dual UNION ALL
SELECT TO_DATE('24-10-2017', 'DD-MM-YYYY'), 3, 10 FROM dual UNION ALL
SELECT TO_DATE('20-10-2017', 'DD-MM-YYYY'), 1, 35 FROM dual UNION ALL
SELECT TO_DATE('20-10-2017', 'DD-MM-YYYY'), 2, 60 FROM dual UNION ALL
SELECT TO_DATE('20-10-2017', 'DD-MM-YYYY'), 3, 90 FROM dual UNION ALL
SELECT TO_DATE('18-10-2017', 'DD-MM-YYYY'), 1, 80 FROM dual UNION ALL
SELECT TO_DATE('18-10-2017', 'DD-MM-YYYY'), 2, 50 FROM dual UNION ALL
SELECT TO_DATE('18-10-2017', 'DD-MM-YYYY'), 3, 45 FROM dual),
t AS (
SELECT calendar_date, ID, record_count,
ROW_NUMBER() OVER (PARTITION BY ID ORDER BY calendar_date desc) AS RN
FROM mytable)
SELECT ID, AVG(RECORD_COUNT)
FROM t
WHERE rn <= 30
group by ID;
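A hedged aside: ROW_NUMBER counts rows, which equals business days here only because each ID has exactly one row per calendar_date. If that assumption ever breaks, DENSE_RANK on the date makes the 30-business-day window explicit; a sketch against the same data:
WITH t AS (
  SELECT calendar_date, ID, record_count,
         DENSE_RANK() OVER (PARTITION BY ID ORDER BY calendar_date DESC) AS day_rank
  FROM mytable)
SELECT ID, AVG(record_count)
FROM t
WHERE day_rank <= 30
group by ID;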

How can I update a column based on the value of another column in SQL?

Basically I have a Product table like this:
date price
--------- -----
02-SEP-14 50
03-SEP-14 60
04-SEP-14 60
05-SEP-14 60
07-SEP-14 71
08-SEP-14 45
09-SEP-14 45
10-SEP-14 24
11-SEP-14 60
I need to update the table to this form:
date price id
--------- ----- --
02-SEP-14 50 1
03-SEP-14 60 2
04-SEP-14 60 2
05-SEP-14 60 2
07-SEP-14 71 3
08-SEP-14 45 4
09-SEP-14 45 4
10-SEP-14 24 5
11-SEP-14 60 6
What I have tried:
CREATE SEQUENCE user_id_seq
START WITH 1
INCREMENT BY 1
CACHE 20;
ALTER TABLE Product
ADD (ID number);
UPDATE Product SET ID = user_id_seq.nextval;
This updates ID sequentially (1, 2, 3, 4, 5, ...), which is not what I want.
I have no idea how to do this using basic SQL commands. Please suggest how I can do it. Thank you in advance.
Here is one way to create a view from your base data. I assume you have more than one product (identified by product id), and that the price dates aren't necessarily consecutive. The sequence is separate for each product id.

(Also, product should be the name of a different table - where the product id is primary key, and you have other information such as product name, category, etc. The table in your post would be more properly called something like price_history.)
alter session set nls_date_format='dd-MON-rr';
create table product ( prod_id number, dt date, price number );
insert into product ( prod_id, dt, price )
select 101, '02-SEP-14', 50 from dual union all
select 101, '03-SEP-14', 60 from dual union all
select 101, '04-SEP-14', 60 from dual union all
select 101, '05-SEP-14', 60 from dual union all
select 101, '07-SEP-14', 71 from dual union all
select 101, '08-SEP-14', 45 from dual union all
select 101, '09-SEP-14', 45 from dual union all
select 101, '10-SEP-14', 24 from dual union all
select 101, '11-SEP-14', 60 from dual union all
select 102, '02-SEP-14', 45 from dual union all
select 102, '04-SEP-14', 45 from dual union all
select 102, '05-SEP-14', 60 from dual union all
select 102, '06-SEP-14', 50 from dual union all
select 102, '09-SEP-14', 60 from dual
;
commit;
create view product_vw ( prod_id, dt, price, seq ) as
select prod_id, dt, price,
count(flag) over (partition by prod_id order by dt)
from ( select prod_id, dt, price,
case when price = lag(price) over (partition by prod_id order by dt)
then null else 1 end as flag
from product
)
;
Now check what the view looks like:
select * from product_vw;
PROD_ID  DT                   PRICE  SEQ
-------  -------------------  -----  ---
    101  02/09/2014 00:00:00     50    1
    101  03/09/2014 00:00:00     60    2
    101  04/09/2014 00:00:00     60    2
    101  05/09/2014 00:00:00     60    2
    101  07/09/2014 00:00:00     71    3
    101  08/09/2014 00:00:00     45    4
    101  09/09/2014 00:00:00     45    4
    101  10/09/2014 00:00:00     24    5
    101  11/09/2014 00:00:00     60    6
    102  02/09/2014 00:00:00     45    1
    102  04/09/2014 00:00:00     45    1
    102  05/09/2014 00:00:00     60    2
    102  06/09/2014 00:00:00     50    3
    102  09/09/2014 00:00:00     60    4
NOTE: This answers the question that was originally asked. The OP changed the data.
If your data is not too large, you can use a correlated subquery:
update product p
set id = (select count(distinct p2.price)
from product p2
where p2.date <= p.date
);
If your data is larger, then merge is more appropriate.
Oracle does not support UPDATE ... FROM a join (that is SQL Server syntax), so a MERGE is the way to express this. It applies the same change-flag logic as above (assuming the date column has been renamed dt, since DATE is a reserved word):
MERGE INTO product p
USING (
  SELECT dt,
         SUM(chg) OVER (ORDER BY dt) AS new_id
  FROM (
    SELECT dt,
           CASE WHEN price = LAG(price) OVER (ORDER BY dt)
                THEN 0 ELSE 1
           END AS chg
    FROM product
  )
) cts
ON (p.dt = cts.dt)
WHEN MATCHED THEN
  UPDATE SET p.id = cts.new_id;
This is about as simple as it gets; there is no simpler way to do it with basic statements.

How to get weekly sums of values between two dates?

My table contains values like these:
Date    Amt   Cash  Money  Name
15-Jun  100   10    20     GUL
16-Jun  200   20    40     ABC
20-Jun  300   30    60     GUL
25-Jun  400   40    80     BCA
28-Jun  500   50    10     GUL
3-Jul   600   60    120    ABC
19-Jun  700   70    140    BCA
26-Jun  800   80    160    ABC
7-Jul   900   90    180    GUL
9-Jul   1000  100   200    ABC
I need to return weekly sums of these values between two dates in Oracle. My expected output:
Date               Amt   Cash  Money
13 to 19 June      1000  100   200
20 to 26 June      1500  150   300
27 June to 3 July  1100  110   130
4 to 10 July       1900  190   380
You can achieve this with a CASE statement, e.g.:
-- test data
with data (dat, val1, val2) as (
  select sysdate - 7, 12, 13  from dual union all
  select sysdate - 6, 32, 1   from dual union all
  select sysdate - 5, 52, 53  from dual union all
  select sysdate - 4, 2, 16   from dual union all
  select sysdate - 3, 72, 154 from dual
)
select -- build up your groups
       case
         when d.dat < to_date('28.09.2016', 'DD.MM.YYYY') then '<28.09.'
         when d.dat > to_date('30.09.2016', 'DD.MM.YYYY') then '>30.09.'
         else '28.-30.'
       end as grp,
       sum(val1),
       sum(val2)
from data d
group by case
           when d.dat < to_date('28.09.2016', 'DD.MM.YYYY') then '<28.09.'
           when d.dat > to_date('30.09.2016', 'DD.MM.YYYY') then '>30.09.'
           else '28.-30.'
         end;

-- output
grp      sum(val1)  sum(val2)
28.-30.         84         54
<28.09.         12         13
>30.09.         74        170
To group by calendar week, use:
-- test data
with data (dat, val1, val2) as (
  select sysdate - 9, 12, 13  from dual union all
  select sysdate - 6, 32, 1   from dual union all
  select sysdate - 5, 52, 53  from dual union all
  select sysdate - 4, 2, 16   from dual union all
  select sysdate + 3, 72, 154 from dual
)
select trunc(dat, 'iw') || '-' || trunc(dat + 7, 'iw'),
       sum(val1),
       sum(val2)
from data
group by trunc(dat, 'iw') || '-' || trunc(dat + 7, 'iw');
The query below has the input dates (from and to) in the first factored subquery. Those can be made into bind variables, or whatever mechanism you want to use to pass these inputs to the query. Then I have the test data in the second factored subquery; you don't need that in your final solution.
I create all the needed weeks in the "weeks" factored subquery, and I use a left outer join so that weeks with no transactions will show 0 sums. Note that in the main query, where I do a join, the "date" column from the base table is not enclosed within any kind of function; this allows the use of an index on that column, which you should have if the table is very large, or if performance may be a concern for any other reason.
Note that the output is different from yours (missing the last row) because I input a to-date before the last row in the table. That is intentional; I wanted to make sure the query works correctly. Also: I didn't use "date" or "week" as column names; that is a very poor practice. Reserved Oracle keywords should not be used as column names. I used "dt" and "wk" instead.
with
user_inputs ( from_dt, to_dt ) as (
select to_date('4-Jun-2016', 'dd-Mon-yyyy'), to_date('3-Jul-2016', 'dd-Mon-yyyy') from dual
),
test_data ( dt, amt, cash, money, name ) as (
select to_date('15-Jun-2016', 'dd-Mon-yyyy'), 100, 10, 20, 'GUL' from dual union all
select to_date('16-Jun-2016', 'dd-Mon-yyyy'), 200, 20, 40, 'ABC' from dual union all
select to_date('20-Jun-2016', 'dd-Mon-yyyy'), 300, 30, 60, 'GUL' from dual union all
select to_date('25-Jun-2016', 'dd-Mon-yyyy'), 400, 40, 80, 'BCA' from dual union all
select to_date('28-Jun-2016', 'dd-Mon-yyyy'), 500, 50, 10, 'GUL' from dual union all
select to_date( '3-Jul-2016', 'dd-Mon-yyyy'), 600, 60, 120, 'ABC' from dual union all
select to_date('19-Jun-2016', 'dd-Mon-yyyy'), 700, 70, 140, 'BCA' from dual union all
select to_date('26-Jun-2016', 'dd-Mon-yyyy'), 800, 80, 160, 'ABC' from dual union all
select to_date( '7-Jul-2016', 'dd-Mon-yyyy'), 900, 90, 180, 'GUL' from dual union all
select to_date( '9-Jul-2016', 'dd-Mon-yyyy'), 1000, 100, 200, 'ABC' from dual
),
weeks ( start_dt ) as (
select trunc(from_dt, 'iw') + 7 * (level - 1)
from user_inputs
connect by level <= 1 + (to_dt - trunc(from_dt, 'iw')) / 7
)
select to_char(w.start_dt, 'dd-Mon-yyyy') || ' - ' ||
to_char(w.start_dt + 6, 'dd-Mon-yyyy') as wk,
nvl(sum(t.amt), 0) as tot_amt, nvl(sum(t.cash), 0) as tot_cash,
nvl(sum(t.money), 0) as tot_money
from weeks w left outer join test_data t
on t.dt >= w.start_dt and t.dt < w.start_dt + 7
group by start_dt
order by start_dt
;
Output:
WK TOT_AMT TOT_CASH TOT_MONEY
-------------------------------------------- ---------- ---------- ----------
30-May-2016 - 05-Jun-2016 0 0 0
06-Jun-2016 - 12-Jun-2016 0 0 0
13-Jun-2016 - 19-Jun-2016 1000 100 200
20-Jun-2016 - 26-Jun-2016 1500 150 300
27-Jun-2016 - 03-Jul-2016 1100 110 130
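On the bind-variable suggestion above: only the user_inputs subquery needs to change. A sketch showing just the input and week generation, with :from_dt and :to_dt as assumed bind names:
with
  user_inputs ( from_dt, to_dt ) as (
    select to_date(:from_dt, 'dd-Mon-yyyy'), to_date(:to_dt, 'dd-Mon-yyyy') from dual
  ),
  weeks ( start_dt ) as (
    select trunc(from_dt, 'iw') + 7 * (level - 1)
    from user_inputs
    connect by level <= 1 + (to_dt - trunc(from_dt, 'iw')) / 7
  )
select start_dt, start_dt + 6 as end_dt
from weeks;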
You can try the query below. I chose 13-Jun-2016 as a starting date; you can choose it as per your requirement, up to any range of dates.
with t as (
  select dt,
         min(dt) over (partition by week) || ' to ' ||
         max(dt) over (partition by week) as week
  from (
    select to_date('13-Jun-2016', 'dd-Mon-yyyy') + (level - 1) as dt,
           ceil(level / 7) as week
    from dual
    connect by level <= 52
  )
)
select week,
       sum(amt),
       sum(cash),
       sum(money)
from (
  select your_table.*,
         t.week
  from your_table, t
  where trunc(to_date(your_table.dt, 'dd-Mon-yyyy')) = trunc(t.dt)
)
group by week;

Finding dates when accounts reach zero

Thanks for taking the time to examine my issue.
I'm trying to figure out a way to return the dates when an account reaches 0.
Sample data:
DATE   ACCOUNT  AMOUNT
11/01  001         100
11/02  002          50
11/03  001        -100
11/07  001          20
11/15  002         -50
11/20  001         -20
Wanted results:
Account ZeroDate
001 11/03
002 11/15
001 11/20
So far I haven't been able to figure out anything that works. Might you be able to point me in the right direction?
Thanks again in advance!
You can use analytic functions to compute the running balance:
SQL> ed
Wrote file afiedt.buf
1 with x as (
2 select date '2011-11-01' dt, 1 account, 100 amt from dual union all
3 select date '2011-11-02', 2, 50 from dual union all
4 select date '2011-11-03', 1, -100 from dual union all
5 select date '2011-11-07', 1, 20 from dual union all
6 select date '2011-11-15', 2, -50 from dual union all
7 select date '2011-11-20', 1, -20 from dual
8 )
9 select dt,
10 account,
11 amt,
12 sum(amt) over (partition by account order by dt) current_balance
13* from x
SQL> /
DT ACCOUNT AMT CURRENT_BALANCE
--------- ---------- ---------- ---------------
01-NOV-11 1 100 100
03-NOV-11 1 -100 0
07-NOV-11 1 20 20
20-NOV-11 1 -20 0
02-NOV-11 2 50 50
15-NOV-11 2 -50 0
6 rows selected.
and then use the running balance to find the zero dates.
SQL> ed
Wrote file afiedt.buf
1 with x as (
2 select date '2011-11-01' dt, 1 account, 100 amt from dual union all
3 select date '2011-11-02', 2, 50 from dual union all
4 select date '2011-11-03', 1, -100 from dual union all
5 select date '2011-11-07', 1, 20 from dual union all
6 select date '2011-11-15', 2, -50 from dual union all
7 select date '2011-11-20', 1, -20 from dual
8 )
9 select account,
10 dt zero_date
11 from (
12 select dt,
13 account,
14 amt,
15 sum(amt) over (partition by account order by dt) current_balance
16 from x
17 )
18* where current_balance = 0
SQL> /
ACCOUNT ZERO_DATE
---------- ---------
1 03-NOV-11
1 20-NOV-11
2 15-NOV-11
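For reference, the same logic as a single plain statement outside of the SQL*Plus editor (table and column names here are assumptions matching the sample data, like the myacct table in the next answer):
select account,
       dt as zero_date
from (
  select account, dt,
         sum(amount) over (partition by account order by dt) as running_balance
  from myacct
)
where running_balance = 0
order by account, dt;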
create table myacct (dt varchar2(5)
, account varchar2(3)
, amount number
)
;
insert into myacct values ('11/01', '001', 100);
insert into myacct values ('11/02', '002', 50);
insert into myacct values ('11/03', '001', -100);
insert into myacct values ('11/07', '001', 20);
insert into myacct values ('11/15', '002', -50);
insert into myacct values ('11/20', '001', -20);
commit;
/* results wanted:
Account ZeroDate
001 11/03
002 11/15
001 11/20 */
select account "Account", dt "ZeroDate"
from myacct
where amount <= 0
;
/* results from above query:
Account ZeroDate
001 11/03
002 11/15
001 11/20
*/