I have a dataset, covering a date range, with three columns: Product_type, date and metric. For a given Product_type, data is not available for all days. For the missing days, we would like to do a forward fill for the next n days using the last available value of the metric.
Product_type  date        metric
------------  ----------  ------
A             2019-10-01      10
A             2019-10-02      12
A             2019-10-03      15
A             2019-10-04       5
A             2019-10-05       5
A             2019-10-06       5
A             2019-10-16      12
A             2019-10-17      23
A             2019-10-18      34
Here, the data from 2019-10-04 to 2019-10-06 has been forward filled. There might be bigger gaps in the dates, but we only want to fill the first n days of a gap.
Here, n = 2, so the rows for 2019-10-05 and 2019-10-06 have been forward filled.
I am not sure how to implement this logic in SQL.
Here's one option. Read comments within code.
Sample data:
SQL> WITH
2 test (product_type, datum, metric)
3 AS
4 (SELECT 'A', DATE '2019-10-01', 10 FROM DUAL
5 UNION ALL
6 SELECT 'A', DATE '2019-10-02', 12 FROM DUAL
7 UNION ALL
8 SELECT 'A', DATE '2019-10-03', 15 FROM DUAL
9 UNION ALL
10 SELECT 'A', DATE '2019-10-04', 5 FROM DUAL
11 UNION ALL
12 SELECT 'A', DATE '2019-10-16', 12 FROM DUAL
13 UNION ALL
14 SELECT 'A', DATE '2019-10-18', 23 FROM DUAL),
-- Query begins here:
15 temp
16 AS
17 -- CB_FWD_FILL = 1 if difference between two consecutive dates is larger than 1 day
18 -- (i.e. that's the gap to be forward filled)
19 (SELECT product_type,
20 datum,
21 metric,
22 LEAD (datum) OVER (PARTITION BY product_type ORDER BY datum)
23 next_datum,
24 CASE
25 WHEN LEAD (datum)
26 OVER (PARTITION BY product_type ORDER BY datum)
27 - datum >
28 1
29 THEN
30 1
31 ELSE
32 0
33 END
34 cb_fwd_fill
35 FROM test)
36 -- original data from the table
37 SELECT product_type, datum, metric FROM test
38 UNION ALL
39 -- DATUM is the last date which is OK; add LEVEL pseudocolumn to it to fill the gap
40 -- with PAR_N number of rows
41 SELECT product_type, datum + LEVEL, metric
42 FROM (SELECT product_type, datum, metric
43 FROM (-- RN = 1 means that that's the first gap in data set - that's the one
44 -- that has to be forward filled
45 SELECT product_type,
46 datum,
47 metric,
48 ROW_NUMBER ()
49 OVER (PARTITION BY product_type ORDER BY datum) rn
50 FROM temp
51 WHERE cb_fwd_fill = 1)
52 WHERE rn = 1)
53 CONNECT BY LEVEL <= &par_n
54 ORDER BY datum;
Result:
Enter value for par_n: 2
PRODUCT_TYPE DATUM METRIC
--------------- ---------- ----------
A 2019-10-01 10
A 2019-10-02 12
A 2019-10-03 15
A 2019-10-04 5
A 2019-10-05 5 --> newly added
A 2019-10-06 5 --> rows
A 2019-10-16 12
A 2019-10-18 23
8 rows selected.
SQL>
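Note that the query above fills only the first gap (that is what the RN = 1 filter does). If you wanted the first &par_n days of every gap filled instead, a hedged, self-contained variant of the same idea (same substitution variable; LEAD finds the next existing date so that generated rows never collide with real ones) could look like this - a sketch, not a drop-in replacement:

WITH
   test (product_type, datum, metric) AS
      (SELECT 'A', DATE '2019-10-01', 10 FROM dual UNION ALL
       SELECT 'A', DATE '2019-10-04',  5 FROM dual UNION ALL
       SELECT 'A', DATE '2019-10-16', 12 FROM dual UNION ALL
       SELECT 'A', DATE '2019-10-18', 23 FROM dual),
   temp AS
      -- NEXT_DATUM = next existing date per product; a gap follows this row
      -- whenever NEXT_DATUM is more than one day away
      (SELECT product_type, datum, metric,
              LEAD (datum) OVER (PARTITION BY product_type ORDER BY datum) next_datum
         FROM test)
-- original rows
SELECT product_type, datum, metric FROM test
UNION ALL
-- up to &par_n filler rows after every row that precedes a gap,
-- stopping before the next real date
SELECT t.product_type, t.datum + l.lvl, t.metric
  FROM temp t
       CROSS JOIN (SELECT LEVEL lvl FROM dual CONNECT BY LEVEL <= &par_n) l
 WHERE t.next_datum - t.datum > 1
   AND t.datum + l.lvl < t.next_datum
ORDER BY datum;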
Another solution:
WITH test (product_type, datum, metric) AS
(
SELECT 'A', DATE '2019-10-01', 10 FROM DUAL
UNION ALL
SELECT 'A', DATE '2019-10-02', 12 FROM DUAL
UNION ALL
SELECT 'A', DATE '2019-10-03', 15 FROM DUAL
UNION ALL
SELECT 'A', DATE '2019-10-04', 5 FROM DUAL
UNION ALL
SELECT 'A', DATE '2019-10-16', 12 FROM DUAL
UNION ALL
SELECT 'A', DATE '2019-10-18', 23 FROM DUAL
),
minmax(mindatum, maxdatum) AS (
SELECT MIN(datum), max(datum) from test
),
alldates (datum, product_type) AS
(
SELECT mindatum + level - 1, t.product_type FROM minmax,
(select distinct product_type from test) t
connect by mindatum + level - 1 <= (select maxdatum from minmax)
),
grouped as (
select a.datum, a.product_type, t.metric,
count(t.product_type) over(partition by a.product_type order by a.datum) as grp
from alldates a
left join test t on t.datum = a.datum
),
final_table as (
select g.datum, g.product_type, g.grp, g.rn,
last_value(g.metric ignore nulls) over(partition by g.product_type order by g.datum) as metric
from (
select g.*, row_number() over(partition by product_type, grp order by datum) - 1 as rn
from grouped g
) g
)
select datum, product_type, metric
from final_table
where rn <= &par_n
order by datum
;
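The heart of this second approach is the GRP column: COUNT(metric) ignores NULLs, so the running count assigns every calendar day to the last real row at or before it, and ROW_NUMBER() within each GRP then says how deep into the gap a day is. A small standalone sketch of just that step (hypothetical four-day CAL data, not the query above):

-- Standalone illustration of the grouping trick: gap days inherit the GRP
-- of the last real row, and RN counts how far into the gap they are.
WITH cal (datum, metric) AS
     (SELECT DATE '2019-10-01', 10   FROM dual UNION ALL
      SELECT DATE '2019-10-02', NULL FROM dual UNION ALL   -- gap day
      SELECT DATE '2019-10-03', NULL FROM dual UNION ALL   -- gap day
      SELECT DATE '2019-10-04', 7    FROM dual)
SELECT datum, metric, grp,
       ROW_NUMBER() OVER (PARTITION BY grp ORDER BY datum) - 1 AS rn
  FROM (SELECT datum, metric,
               COUNT(metric) OVER (ORDER BY datum) AS grp
          FROM cal)
 ORDER BY datum;
-- grp comes out 1,1,1,2 and rn comes out 0,1,2,0; filtering rn <= &par_n
-- keeps only the first n days of each gap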
Related
I have this table:
Site_ID  Volume  RPT_Date    RPT_Hour
-------  ------  ----------  --------
1        10      01/01/2021  1
1        7       01/01/2021  2
1        13      01/01/2021  3
1        11      01/16/2021  1
1        3       01/16/2021  2
1        5       01/16/2021  3
2        9       01/01/2021  1
2        24      01/01/2021  2
2        16      01/01/2021  3
2        18      01/16/2021  1
2        7       01/16/2021  2
2        1       01/16/2021  3
I need to select the RPT_Hour with the highest Volume for each set of dates
Needed Output:
Site_ID  Volume  RPT_Date    RPT_Hour
-------  ------  ----------  --------
1        13      01/01/2021  1
1        11      01/16/2021  1
2        24      01/01/2021  2
2        18      01/16/2021  1
SELECT site_id, volume, rpt_date, rpt_hour
FROM (SELECT t.*,
ROW_NUMBER()
OVER (PARTITION BY site_id, rpt_date ORDER BY volume DESC) AS rn
FROM MyTable) t
WHERE rn = 1;
I cannot figure out how to group the table into like-date groups. If I could do that, I think rn = 1 would return the highest-volume row for each date.
The way I see it, your query is OK (but rpt_hour in desired output is not).
SQL> with test (site_id, volume, rpt_date, rpt_hour) as
2 (select 1, 10, date '2021-01-01', 1 from dual union all
3 select 1, 7, date '2021-01-01', 2 from dual union all
4 select 1, 13, date '2021-01-01', 3 from dual union all
5 select 1, 11, date '2021-01-16', 1 from dual union all
6 select 1, 3, date '2021-01-16', 2 from dual union all
7 select 1, 5, date '2021-01-16', 3 from dual union all
8 --
9 select 2, 9, date '2021-01-01', 1 from dual union all
10 select 2, 24, date '2021-01-01', 3 from dual union all
11 select 2, 16, date '2021-01-01', 3 from dual union all
12 select 2, 18, date '2021-01-16', 1 from dual union all
13 select 2, 7, date '2021-01-16', 2 from dual union all
14 select 2, 1, date '2021-01-16', 3 from dual
15 ),
16 temp as
17 (select t.*,
18 row_number() over (partition by site_id, rpt_date order by volume desc) rn
19 from test t
20 )
21 select site_id, volume, rpt_date, rpt_hour
22 from temp
23 where rn = 1
24 /
SITE_ID VOLUME RPT_DATE RPT_HOUR
---------- ---------- ---------- ----------
1 13 01/01/2021 3
1 11 01/16/2021 1
2 24 01/01/2021 3
2 18 01/16/2021 1
SQL>
One option would be using the MAX(..) KEEP (DENSE_RANK ..) OVER (PARTITION BY ..) analytic function, with no need for a subquery, such as:
SELECT DISTINCT
site_id,
MAX(volume) KEEP (DENSE_RANK FIRST ORDER BY volume DESC) OVER
(PARTITION BY site_id, rpt_date) AS volume,
rpt_date,
MAX(rpt_hour) KEEP (DENSE_RANK FIRST ORDER BY volume DESC) OVER
(PARTITION BY site_id, rpt_date) AS rpt_hour
FROM t
GROUP BY site_id, rpt_date, volume, rpt_hour
ORDER BY site_id, rpt_date
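For what it's worth, the same KEEP (DENSE_RANK FIRST) idea can also be written as a plain aggregate, which removes the need for both DISTINCT and the OVER clause; a hedged sketch, assuming the table is called MyTable as in the question:

SELECT site_id,
       MAX(volume)                                                AS volume,
       rpt_date,
       MAX(rpt_hour) KEEP (DENSE_RANK FIRST ORDER BY volume DESC) AS rpt_hour
  FROM MyTable
 GROUP BY site_id, rpt_date
 ORDER BY site_id, rpt_date;

If two hours tie on the top volume, MAX picks the later hour; use MIN instead if you prefer the earlier one.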
I'm working with Oracle and I have a table with a column of type TIMESTAMP. I was wondering how I can extract the records from the last 4 weeks of activity on the database, partitioned by week.
Following rows are inserted on week 1
kc 2 04-10-2021
vc 3 06-10-2021
vk 4 07-10-2021
Following rows are inserted on week 2
cv 1 12-10-2021
ck 5 14-10-2021
Following rows are inserted on week 3
vv 7 19-10-2021
Following rows are inserted on week 4
vx 7 29-10-2021
Table now has
SQL>select * from tab;
NAME  VALUE  TIMESTAMP
----  -----  ----------
kc        2  04-10-2021
vc        3  06-10-2021
vk        4  07-10-2021
cv        1  12-10-2021
ck        5  14-10-2021
vv        7  19-10-2021
vx        7  29-10-2021
I would like a query which would give me the number of rows added each week, in the last 4 weeks.
This is what I would like to see
numofrows week
--------- -----
3 1
2 2
1 3
1 4
One option is to use the TO_CHAR function and its iw format mask:
SQL> with test (name, datum) as
2 (select 'kc', date '2021-10-04' from dual union all
3 select 'vc', date '2021-10-06' from dual union all
4 select 'vk', date '2021-10-07' from dual union all
5 select 'cv', date '2021-10-12' from dual union all
6 select 'ck', date '2021-10-14' from dual union all
7 select 'vv', date '2021-10-19' from dual union all
8 select 'vx', DATE '2021-10-29' from dual
9 )
10 select to_char(datum, 'iw') week,
11 count(*)
12 from test
13 where datum >= add_months(sysdate, -1) --> the last month
14 group by to_char(datum, 'iw');
WE COUNT(*)
-- ----------
42 1
43 1
40 3
41 2
SQL>
Line #13: I intentionally used "one month" instead of "4 weeks", as I suspected (maybe wrongly) that this is what you actually want ("a month has 4 weeks" is close, but not exact, and sometimes not close enough).
If you really want 4 weeks, what does that mean? SYSDATE minus 28 days (as every week has 7 days)? Then you'd modify line #13 to
where datum >= trunc(sysdate - 4*7)
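For a quick, hedged sanity check of how the two lower bounds differ, you could run something like:

-- Compare the two cut-off dates side by side (illustrative only)
SELECT TRUNC(ADD_MONTHS(SYSDATE, -1)) AS one_month_back,
       TRUNC(SYSDATE - 4*7)           AS four_weeks_back
  FROM dual;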
Or, maybe it is really the last 4 weeks:
SQL> with test (name, datum) as
2 (select 'kc', date '2021-10-04' from dual union all
3 select 'vc', date '2021-10-06' from dual union all
4 select 'vk', date '2021-10-07' from dual union all
5 select 'cv', date '2021-10-12' from dual union all
6 select 'ck', date '2021-10-14' from dual union all
7 select 'vv', date '2021-10-19' from dual union all
8 select 'vx', DATE '2021-10-29' from dual
9 ),
10 temp as
11 (select to_char(datum, 'iw') week,
12 count(*) cnt,
13 row_number() over (order by to_char(datum, 'iw') desc) rn
14 from test
15 group by to_char(datum, 'iw')
16 )
17 select week, cnt
18 from temp
19 where rn <= 4
20 order by week;
WE CNT
-- ----------
40 3
41 2
42 1
43 1
SQL>
Now you have several options; see which one fits best (if any).
I "simulated" missing data (see TEST CTE), created a calendar (calend) and ... did the job. Read comments within code:
SQL> with test (name, datum) as
2 -- sample data
3 (select 'vv', date '2021-10-19' from dual union all
4 select 'vx', DATE '2021-10-29' from dual
5 ),
6 calend as
7 -- the last 31 days; 4 weeks are included, obviously
8 (select max_datum - level + 1 datum
9 from (select max(a.datum) max_datum from test a)
10 connect by level <= 31
11 ),
12 joined as
13 -- joined TEST and CALEND data
14 (select to_char(c.datum, 'iw') week,
15 t.name
16 from calend c left join test t on t.datum = c.datum
17 ),
18 last4 as
19 -- last 4 weeks
20 (select week, count(name) cnt,
21 row_number() over (order by week desc) rn
22 from joined
23 group by week
24 )
25 select week, cnt
26 from last4
27 where rn <= 4
28 order by week;
WE CNT
-- ----------
40 0
41 0
42 1
43 1
SQL>
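One side note, hedged: TO_CHAR(datum, 'iw') returns only the ISO week number, so if the data ever spans more than one year, equal week numbers from different years fall into the same bucket. Grouping by TRUNC(datum, 'IW') (the Monday of the ISO week) avoids that; for example (sample rows are hypothetical, one of them from another year):

-- Group by the ISO week's Monday instead of the bare week number
WITH test (name, datum) AS
     (SELECT 'vv', DATE '2021-10-19' FROM dual UNION ALL
      SELECT 'vx', DATE '2021-10-29' FROM dual UNION ALL
      SELECT 'zz', DATE '2020-10-20' FROM dual)
SELECT TRUNC(datum, 'IW') AS week_start,
       COUNT(*)           AS cnt
  FROM test
 GROUP BY TRUNC(datum, 'IW')
 ORDER BY week_start;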
Given a table called Project, I need the list of team_ids that won at least one award every week in the last 3 months.
launch_date team_id project_name
2019-01-01 123 A
2019-01-01 345 B
2019-01-01 357 C
2019-01-09 123 D
2019-01-08 345 E
2019-01-21 123 F
project_name award
A Y
B N
C Y
D Y
E N
F Y
The last 3 months can be covered with the WHERE condition below, but how do I split launch_date into weekly intervals?
where launch_date >= sysdate - 90
With the given data, the answer should be team_id 123.
Your sample data covers only 21 days instead of 3 months.
You can generate the week starting dates for the period and compare them with your table data to check whether the team won an award in every week, as follows:
SQL> --SAMPLE DATA
SQL> with teams (launch_date, team_id, project_name)
2 as
3 (SELECT DATE'2019-01-01', 123, 'A' FROM DUAL UNION ALL
4 SELECT DATE'2019-01-01', 345, 'B' FROM DUAL UNION ALL
5 SELECT DATE'2019-01-01', 357, 'C' FROM DUAL UNION ALL
6 SELECT DATE'2019-01-09', 123, 'D' FROM DUAL UNION ALL
7 SELECT DATE'2019-01-08', 345, 'E' FROM DUAL UNION ALL
8 SELECT DATE'2019-01-21', 123, 'F' FROM DUAL),
9 AWARDS(project_name, award)
10 AS
11 (SELECT 'A','Y' FROM DUAL UNION ALL
12 SELECT 'B','N' FROM DUAL UNION ALL
13 SELECT 'C','Y' FROM DUAL UNION ALL
14 SELECT 'D','Y' FROM DUAL UNION ALL
15 SELECT 'E','N' FROM DUAL UNION ALL
16 SELECT 'F','Y' FROM DUAL),
17 -- YOUR QUERY START FROM HERE
18 -- WITH
19 WKS(DT) AS
20 (SELECT DISTINCT TRUNC(DATE '2019-01-21' - LEVEL + 1, 'W')
21 FROM DUAL CONNECT BY LEVEL <= 21
22 )
23 SELECT T.TEAM_ID
24 FROM WKS W
25 LEFT JOIN TEAMS T ON W.DT = TRUNC(T.LAUNCH_DATE, 'W')
26 LEFT JOIN AWARDS A ON A.PROJECT_NAME = T.PROJECT_NAME
27 WHERE A.AWARD = 'Y'
28 GROUP BY T.TEAM_ID
29 HAVING COUNT(1) = ( SELECT COUNT(1) FROM WKS);
TEAM_ID
----------
123
SQL>
To cover 3 months of data in the WKS CTE, you need to replace
WKS(DT) AS
(SELECT DISTINCT TRUNC(DATE '2019-01-21' - LEVEL + 1, 'W')
FROM DUAL CONNECT BY LEVEL <= 21
)
with
WKS(DT) AS
( SELECT DISTINCT TRUNC(sysdate - LEVEL + 1, 'W')
  FROM DUAL CONNECT BY LEVEL <= trunc(sysdate) - add_months(trunc(sysdate), -3)
)
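If you want to see which week bucket a given launch date falls into, here is a quick standalone check (hedged; the dates are taken from the question purely for illustration):

-- TRUNC(d, 'W') snaps a date to the start of its week, where weeks are
-- counted from the first day of that date's month; launch dates in the
-- same bucket compare equal, which is what the join above relies on.
SELECT launch_date,
       TRUNC(launch_date, 'W') AS week_start
  FROM (SELECT DATE '2019-01-01' AS launch_date FROM dual UNION ALL
        SELECT DATE '2019-01-09' FROM dual UNION ALL
        SELECT DATE '2019-01-21' FROM dual)
 ORDER BY launch_date;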
I'm asking for your help.
Here is my data set:
ID date_answered
---------- --------------
1 16/09/19
2 16/09/19
3 16/09/19
4 16/09/19
5 16/09/19
6 16/09/19
7 16/09/19
8 16/09/19
9 16/09/19
10 17/09/19
11 17/09/19
12 17/09/19
13 18/09/19
14 18/09/19
15 18/09/19
16 18/09/19
17 19/09/19
18 19/09/19
19 19/09/19
20 19/09/19
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
As you can see:
on 16/09/2019, 9 people answered
on 17/09/2019, 3 people answered
on 18/09/2019, 4 people answered
on 19/09/2019, 4 people answered
and there are still 20 people who didn't answer.
To calculate how many people answered per day, I have done:
nb_answered = count(id) over (partition by date_answered order by date_answered)
Now my problem is that I'm trying to get this:
date_answered   nb_answered   nb_left
-------------   -----------   -----------
16/09/2019      9             40
17/09/2019      3             31 (40 - 9)
18/09/2019      4             28 (31 - 3)
19/09/2019      4             24 (28 - 4)
I have tried:
count(id) over (order by date_answered rows between unbounded preceding and unbounded following)
which gives me 40 (the total number of people). That works for the first date, but for the second date I don't know how to get 31.
How can I, for every day, subtract from the total the number of people who have already answered?
Do you have any suggestions?
Another option might be a correlated subquery in the SELECT statement.
The example is a little simplified (I didn't feel like typing that much).
SQL> with test (id, da) as
2 (select 1, 16092019 from dual union all
3 select 2, 16092019 from dual union all
4 select 3, 16092019 from dual union all
5 select 4, 16092019 from dual union all
6 select 5, 16092019 from dual union all
7 --
8 select 6, 17092019 from dual union all
9 select 7, 17092019 from dual union all
10 select 8, 17092019 from dual union all
11 --
12 select 9, 19092019 from dual union all
13 --
14 select 10, null from dual union all
15 select 11, null from dual union all
16 select 12, null from dual union all
17 select 13, null from dual
18 )
19 select a.da date_answered,
20 count(a.id) nb_answered,
21 (select count(*) from test b
22 where b.da >= a.da
23 or b.da is null
24 ) nb_left
25 from test a
26 group by a.da
27 order by a.da;
DATE_ANSWERED NB_ANSWERED NB_LEFT
------------- ----------- ----------
16092019 5 13
17092019 3 8
19092019 1 5
4 4
SQL>
You want to subtract the cumulative count from the overall count:
select date_answered, count(*) as answered_on_date,
( sum(count(*)) over () -
sum(count(*)) over (order by date_answered nulls last)
) as remaining
from t
group by date_answered
order by date_answered;
If you don't want to include the current date, then subtract that as well:
select date_answered, count(*) as answered_on_date,
( sum(count(*)) over () -
sum(count(*)) over (order by date_answered nulls last) -
count(*)
) as remaining
from t
group by date_answered
order by date_answered;
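To see the arithmetic of the first variant on a small data set (a hedged, standalone sketch; unanswered people carry NULL dates, as in the simplified example above):

-- Total people minus the running total of people who have answered so far
WITH t (id, date_answered) AS
     (SELECT 1, DATE '2019-09-16' FROM dual UNION ALL
      SELECT 2, DATE '2019-09-16' FROM dual UNION ALL
      SELECT 3, DATE '2019-09-17' FROM dual UNION ALL
      SELECT 4, NULL FROM dual UNION ALL
      SELECT 5, NULL FROM dual)
SELECT date_answered,
       COUNT(*)                              AS answered_on_date,
       SUM(COUNT(*)) OVER ()                 AS total_people,
       SUM(COUNT(*)) OVER () -
          SUM(COUNT(*)) OVER (ORDER BY date_answered NULLS LAST) AS remaining
  FROM t
 GROUP BY date_answered
 ORDER BY date_answered;
-- 16/09: 2 answered, 3 remaining; 17/09: 1 answered, 2 remaining;
-- NULL group: the 2 people who never answered, 0 remaining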
I have the tables below, and I need my query to give me the number of operations grouped by date.
For dates on which there are no operations, I still need the date returned with a zero count.
Kind of like this:
OPERATION_DATE | COUNT_OPERATION | COUNT_OPERATION2 |
04/06/2019 | 453 | 81 |
05/06/2019 | 0 | 0 |
-- QUERY I TRIED
SELECT
    T1.DATE_OPERATION AS DATE_OPERATION,
    NVL(T1.COUNT_OPERATION, 0)  AS COUNT_OPERATION,
    NVL(T1.COUNT_OPERATIONX, 0) AS COUNT_OPERATIONX
FROM
(
    SELECT
        TRUNC(o.DATE_OPERATION) AS DATE_OPERATION,
        COUNT(o.ID_OPERATION) AS COUNT_OPERATION,
        COUNT(CASE WHEN ot.OPERATION_TYPE = 'X' THEN 1 END) AS COUNT_OPERATIONX
    FROM OPERATION o
    LEFT JOIN OPERATION_TYPE ot ON ot.ID_OPERATION = o.ID_OPERATION
    WHERE ot.OPERATION_TYPE IN ('X', 'W', 'Z', 'I', 'J', 'V')
      AND TRUNC(o.DATE_OPERATION) >= TO_DATE('01/06/2019', 'DD/MM/YYYY')
    GROUP BY TRUNC(o.DATE_OPERATION)
) T1
-- TABLES
CREATE TABLE OPERATION
( ID_OPERATION NUMBER NOT NULL,
DATE_OPERATION DATE NOT NULL,
VALUE NUMBER NOT NULL )
CREATE TABLE OPERATION_TYPE
( ID_OPERATION NUMBER NOT NULL,
OPERATION_TYPE VARCHAR2(1) NOT NULL,
VALUE NUMBER NOT NULL)
I guess that it is a calendar you need, i.e. a table which contains all dates involved. Otherwise, how can you display something that doesn't exist?
This is what you currently have (I'm using only the operation table; add another one yourself):
SQL> with
2 operation (id_operation, date_operation, value) as
3 (select 1, date '2019-06-01', 100 from dual union all
4 select 2, date '2019-06-01', 200 from dual union all
5 -- 02/06/2019 is missing
6 select 3, date '2019-06-03', 300 from dual union all
7 select 4, date '2019-06-04', 400 from dual
8 )
9 select o.date_operation,
10 count(o.id_operation)
11 from operation o
12 group by o.date_operation
13 order by o.date_operation;
DATE_OPERA COUNT(O.ID_OPERATION)
---------- ---------------------
01/06/2019 2
03/06/2019 1
04/06/2019 1
SQL>
As there are no rows that belong to 02/06/2019, the query can't return anything for that date (you already know that).
Therefore, add a calendar. If you already have such a table, fine - use it. If not, create one with a hierarchical query which adds LEVEL to a starting date. I'm using 01/06/2019 as the starting point and creating 5 days (note the CONNECT BY clause).
SQL> with
2 operation (id_operation, date_operation, value) as
3 (select 1, date '2019-06-01', 100 from dual union all
4 select 2, date '2019-06-01', 200 from dual union all
5 -- 02/06/2019 is missing
6 select 3, date '2019-06-03', 300 from dual union all
7 select 4, date '2019-06-04', 400 from dual
8 ),
9 dates (datum) as --> this is a calendar
10 (select date '2019-06-01' + level - 1
11 from dual
12 connect by level <= 5
13 )
14 select d.datum,
15 count(o.id_operation)
16 from operation o full outer join dates d on d.datum = o.date_operation
17 group by d.datum
18 order by d.datum;
DATUM COUNT(O.ID_OPERATION)
---------- ---------------------
01/06/2019 2
02/06/2019 0 --> missing in source table
03/06/2019 1
04/06/2019 1
05/06/2019 0 --> missing in source table
SQL>
Probably a better option is to dynamically create a calendar so that it doesn't depend on any hardcoded values, but uses the min(date_operation) to max(date_operation) time span. Here we go:
SQL> with
2 operation (id_operation, date_operation, value) as
3 (select 1, date '2019-06-01', 100 from dual union all
4 select 2, date '2019-06-01', 200 from dual union all
5 -- 02/06/2019 is missing
6 select 3, date '2019-06-03', 300 from dual union all
7 select 4, date '2019-06-04', 400 from dual
8 ),
9 dates (datum) as --> this is a calendar
10 (select x.min_datum + level - 1
11 from (select min(o.date_operation) min_datum,
12 max(o.date_operation) max_datum
13 from operation o
14 ) x
15 connect by level <= x.max_datum - x.min_datum + 1
16 )
17 select d.datum,
18 count(o.id_operation)
19 from operation o full outer join dates d on d.datum = o.date_operation
20 group by d.datum
21 order by d.datum;
DATUM COUNT(O.ID_OPERATION)
---------- ---------------------
01/06/2019 2
02/06/2019 0 --> missing in source table
03/06/2019 1
04/06/2019 1
SQL>
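The examples above use only the OPERATION table. Here is a hedged sketch of how the second count from the question (operations of type 'X', via the OPERATION_TYPE table from the posted DDL) could sit on top of the dynamic calendar; it assumes at most one OPERATION_TYPE row per operation:

WITH dates (datum) AS        -- dynamic calendar: MIN to MAX operation date
     (SELECT x.min_datum + LEVEL - 1
        FROM (SELECT MIN(TRUNC(date_operation)) min_datum,
                     MAX(TRUNC(date_operation)) max_datum
                FROM operation) x
      CONNECT BY LEVEL <= x.max_datum - x.min_datum + 1)
SELECT d.datum                                             AS operation_date,
       COUNT(o.id_operation)                               AS count_operation,
       COUNT(CASE WHEN ot.operation_type = 'X' THEN 1 END) AS count_operationx
  FROM dates d
       LEFT JOIN operation o
              ON TRUNC(o.date_operation) = d.datum
       LEFT JOIN operation_type ot
              ON ot.id_operation = o.id_operation
 GROUP BY d.datum
 ORDER BY d.datum;

The question's OPERATION_TYPE IN (...) filter would belong in the ot join condition rather than the WHERE clause, so that calendar days without operations are not filtered away.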