For example, I have a table named test_cross_months and the data is as below:
id | start_date | end_date
-- | ---------- | ----------
44 | 2020-01-04 | 2020-01-04
44 | 2020-01-30 | 2020-02-10
44 | 2020-02-27 | 2020-03-03
Expected result:
id | start_date | end_date
-- | ---------- | ----------
44 | 2020-01-04 | 2020-01-04
44 | 2020-01-30 | 2020-01-31
44 | 2020-02-01 | 2020-02-10
44 | 2020-02-27 | 2020-02-29
44 | 2020-03-01 | 2020-03-03
So for the row
| 44 | 2020-01-30 | 2020-02-10 |
there should be two rows: one from 30-Jan-2020 to 31-Jan-2020 and one from 1-Feb-2020 to 10-Feb-2020.
I tried comparing the end date with the last day of the month of start_date (roughly the approach sketched below), but I am facing issues because a new row is not getting created for the end_date range.
Could anyone please suggest a solution?
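For reference, the comparison-based attempt probably looked something like the sketch below (my reconstruction, using the column and table names from the question); because it compares each row against LAST_DAY(start_date) only once, it can never split a range across more than one month boundary:
SELECT id, start_date, LEAST(LAST_DAY(start_date), end_date) AS end_date
FROM   test_cross_months
UNION ALL
-- at most one extra row per range: spans covering three or more months stay incomplete
SELECT id, LAST_DAY(start_date) + 1 AS start_date, end_date
FROM   test_cross_months
WHERE  end_date > LAST_DAY(start_date);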
You can use a recursive query (which will work regardless of how many months your ranges span):
WITH months ( id, start_date, end_date, final_date ) AS (
SELECT id,
start_date,
LEAST( LAST_DAY( start_date ), end_date ),
end_date
FROM table_name
UNION ALL
SELECT id,
end_date + INTERVAL '1' DAY,
LEAST( ADD_MONTHS( end_date, 1 ), final_date ),
final_date
FROM months
WHERE end_date < final_date
)
SEARCH DEPTH FIRST BY final_date SET dt_order
SELECT id,
start_date,
end_date
FROM months;
Which, for the sample data:
CREATE TABLE table_name (id, start_date, end_date) AS
SELECT 44, DATE '2020-01-04', DATE '2020-01-04' FROM DUAL UNION ALL
SELECT 44, DATE '2020-01-30', DATE '2020-02-10' FROM DUAL UNION ALL
SELECT 44, DATE '2020-02-27', DATE '2020-03-03' FROM DUAL;
Outputs:
ID | START_DATE          | END_DATE
-: | :------------------ | :------------------
44 | 2020-01-04 00:00:00 | 2020-01-04 00:00:00
44 | 2020-01-30 00:00:00 | 2020-01-31 00:00:00
44 | 2020-02-01 00:00:00 | 2020-02-10 00:00:00
44 | 2020-02-27 00:00:00 | 2020-02-29 00:00:00
44 | 2020-03-01 00:00:00 | 2020-03-03 00:00:00
db<>fiddle here
Using a table of numbers and date arithmetic
-- example of table of numbers
with nmbrs(n) as(
select 0 from dual union all
select 1 from dual union all
select 2 from dual
)
select t.id,
case when n=0 then t.start_date else trunc(t.start_date, 'MM') + NUMTOYMINTERVAL(n, 'MONTH') end s,
case when n=MONTHS_BETWEEN(last_day(t.end_date), last_day(t.start_date)) then t.end_date
else last_day(add_months(t.start_date, n)) end e -- add_months avoids invalid dates such as 30-Feb
from test_cross_months t
join nmbrs on nmbrs.n <= MONTHS_BETWEEN(last_day(t.end_date), last_day(t.start_date))
order by t.id, s
db<>fiddle
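The hard-coded 0, 1, 2 list above only covers ranges spanning up to three calendar months. As a sketch (my assumption, not part of the original answer), the numbers can instead be generated from the data itself with CONNECT BY:
with mx as (
  -- widest span, in whole months, across all rows
  select max(months_between(last_day(end_date), last_day(start_date))) as max_n
  from test_cross_months
),
nmbrs(n) as (
  select level - 1
  from mx
  connect by level <= max_n + 1
)
select n from nmbrs;
The generated nmbrs CTE can then be joined exactly as in the query above.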
Related
I have a requirement to fetch values based on eff_date and end_date. Sample data is given below.
Database: Oracle 11g
Example data:
id | val | eff_date  | end_date
-- | --- | --------- | ---------
10 | 100 | 01-Jan-21 | 04-Jan-21
10 | 105 | 05-Jan-21 | 07-Jan-21
10 | 100 | 08-Jan-21 | 10-Jan-21
10 | 100 | 11-Jan-21 | 17-Jan-21
10 | 100 | 18-Jan-21 | 21-Jan-21
10 | 110 | 22-Jan-21 | null
Output:
id | val | eff_date  | end_date
-- | --- | --------- | ---------
10 | 100 | 01-Jan-21 | 04-Jan-21
10 | 105 | 05-Jan-21 | 07-Jan-21
10 | 100 | 08-Jan-21 | 21-Jan-21
10 | 110 | 22-Jan-21 | null
You can use the ROW_NUMBER analytic function and then aggregate:
SELECT id,
val,
MIN(eff_date) AS eff_date,
MAX(end_date) AS end_date
FROM (
SELECT t.*,
ROW_NUMBER() OVER (PARTITION BY id ORDER BY eff_date)
- ROW_NUMBER() OVER (PARTITION BY id, val ORDER BY eff_date) AS grp
FROM table_name t
)
GROUP BY id, val, grp
ORDER BY id, eff_date;
Which, for the sample data:
CREATE TABLE table_name (id, val, eff_date, end_date) AS
SELECT 10, 100, DATE '2021-01-01', DATE '2021-01-04' FROM DUAL UNION ALL
SELECT 10, 105, DATE '2021-01-05', DATE '2021-01-07' FROM DUAL UNION ALL
SELECT 10, 100, DATE '2021-01-08', DATE '2021-01-10' FROM DUAL UNION ALL
SELECT 10, 100, DATE '2021-01-11', DATE '2021-01-17' FROM DUAL UNION ALL
SELECT 10, 100, DATE '2021-01-18', DATE '2021-01-21' FROM DUAL UNION ALL
SELECT 10, 110, DATE '2021-01-22', null FROM DUAL;
Outputs:
ID | VAL | EFF_DATE            | END_DATE
-: | --: | :------------------ | :------------------
10 | 100 | 2021-01-01 00:00:00 | 2021-01-04 00:00:00
10 | 105 | 2021-01-05 00:00:00 | 2021-01-07 00:00:00
10 | 100 | 2021-01-08 00:00:00 | 2021-01-21 00:00:00
10 | 110 | 2021-01-22 00:00:00 | null
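To see why this works, it can help to run just the inner query: the difference between the two ROW_NUMBER values stays constant within each unbroken run of the same val, so it acts as a group key. A sketch against the same table_name sample data:
SELECT t.*,
       -- position within the id, minus position within the (id, val) partition
       ROW_NUMBER() OVER (PARTITION BY id ORDER BY eff_date)
       - ROW_NUMBER() OVER (PARTITION BY id, val ORDER BY eff_date) AS grp
FROM   table_name t
ORDER  BY id, eff_date;
The three consecutive val = 100 rows starting on 2021-01-08 all get the same grp value (1 for this data), so the outer aggregation collapses them into a single row.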
From Oracle 12, you can use MATCH_RECOGNIZE to perform row-by-row processing:
SELECT *
FROM table_name t
MATCH_RECOGNIZE(
PARTITION BY id
ORDER BY eff_date
MEASURES
FIRST(val) AS val,
FIRST(eff_date) AS eff_date,
LAST(end_date) AS end_date
PATTERN (same_val+)
DEFINE same_val AS FIRST(val) = val
)
Which has the same output and is likely to be more efficient.
fiddle
I have a problem with fetching a few exceptions from the DB.
Example, table b:
sn | v_num | start_date | end_date
-- | ----- | ---------- | ----------
1  | 001   | 01-01-2019 | 31-12-2099
1  | 002   | 01-01-2021 | 31-01-2022
1  | 003   | 01-02-2022 | 31-12-2099
2  | 001   | 01-01-2022 | 31-12-2099
2  | 002   | 01-07-2022 | 31-07-2022
2  | 003   | 01-08-2022 | 31-12-2099
Expected output:
sn | v_num | start_date | end_date
-- | ----- | ---------- | ----------
1  | 003   | 01-02-2022 | 31-12-2099
2  | 001   | 01-01-2022 | 31-12-2099
Currently I'm here:
SELECT * FROM table a, table b
WHERE a.sn = b.sn
AND b.v_num = (SELECT max (v_num) FROM b WHERE a.sn = b.sn)
but obviously that is not good because of a few cases like the one above with sn = 2.
In conclusion, I need to get one record per sn where v_num is the maximum (which covers 95% of them in the DB), except when the start_date of that maximum-v_num record is greater than today.
Filter using start_date <= TRUNC(SYSDATE) and then use the ROW_NUMBER analytic function:
SELECT *
FROM (
SELECT a.*,
ROW_NUMBER() OVER (PARTITION BY sn ORDER BY v_num DESC) AS rn
FROM "TABLE" a
WHERE start_date <= TRUNC(SYSDATE)
)
WHERE rn = 1;
If the start_date has a time component then you can use start_date < TRUNC(SYSDATE) + INTERVAL '1' DAY to get all the values for today from 00:00:00 to 23:59:59.
If you can have ties for the maximum and want to return all the ties then you can use the RANK analytic function instead of ROW_NUMBER.
Which, for the sample data:
CREATE TABLE "TABLE" (sn, v_num, start_date, end_date) AS
SELECT 1, '001', DATE '2022-01-01', DATE '2099-12-31' FROM DUAL UNION ALL
SELECT 1, '002', DATE '2022-01-01', DATE '2022-01-31' FROM DUAL UNION ALL
SELECT 1, '003', DATE '2022-02-01', DATE '2099-12-31' FROM DUAL UNION ALL
SELECT 2, '001', DATE '2022-01-01', DATE '2099-12-31' FROM DUAL UNION ALL
SELECT 2, '002', DATE '2022-07-01', DATE '2022-07-31' FROM DUAL UNION ALL
SELECT 2, '003', DATE '2022-08-01', DATE '2099-12-31' FROM DUAL;
Outputs:
SN | V_NUM | START_DATE          | END_DATE            | RN
-: | :---- | :------------------ | :------------------ | -:
1  | 003   | 2022-02-01 00:00:00 | 2099-12-31 00:00:00 | 1
2  | 001   | 2022-01-01 00:00:00 | 2099-12-31 00:00:00 | 1
db<>fiddle here
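For completeness, a sketch of the ties-included variant mentioned above; it is the same query with RANK swapped in for ROW_NUMBER, so rn = 1 keeps every row tied for the maximum v_num:
SELECT *
FROM (
  SELECT a.*,
         RANK() OVER (PARTITION BY sn ORDER BY v_num DESC) AS rn
  FROM   "TABLE" a
  WHERE  start_date <= TRUNC(SYSDATE)
)
WHERE rn = 1;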
I have a table from which I am trying to return the quantity per day that each article was in the system.
For example, in the table Bestand there are multiple pallets of different articles, each with a booking-in and booking-out date; I am trying to find out the min and max amount of stock that was in the system per article and month.
My thinking is that I can return the stock quantity for each day and then read out the min and max values.
The timespan would be set at the time of running the SQL and the articles would be fixed.
To find out the quantity for each day I have used the following SQL:
SELECT DISTINCT
a.artbez1 AS Artikelbezeichnung,
b.artikelnr AS Artikelnummer,
SUM(CASE WHEN TO_DATE('2019-11-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') BETWEEN b.neu_datum AND b.aender_datum THEN 1 * b.menge_ist ELSE 0 END) AS "01 Nov 2019"
FROM
artikel a, bestand b
WHERE
b.artikelnr IN ('273632002', .... (huge long list of numbers) ....)
AND b.artikelnr = a.artikelnr
GROUP BY
a.artbez1, b.artikelnr;
This returns for example:
ARTIKELBEZEICHNUNG | ARTIKELNUMMER | 01 Nov 2019
:----------------- | ------------: | ----------:
SC-4400.CW         |     220450002 |          39
S-320.FK120        |     220502004 |           0
H-595.FK120        |     220800004 |          35
AC-548.FK209       |     220948032 |           0
AS-6800.CW         |     221355002 |          20
I would like to return this for each day of the month and then, from that, return the min and max value for each article.
I have the following SQL to return the days of a given month and was wondering if anyone had any ideas on how the two could be combined (if at all possible):
SELECT to_date('01.11.2019','dd.mm.yyyy')+LEVEL-1
FROM dual
CONNECT BY LEVEL <= TO_CHAR(LAST_DAY(to_date('01.11.2019','dd.mm.yyyy')),'DD')
DATES
2019-11-01 00:00:00
2019-11-02 00:00:00
2019-11-03 00:00:00
2019-11-04 00:00:00
2019-11-05 00:00:00
2019-11-06 00:00:00
2019-11-07 00:00:00
The result I am trying to get would be something like:
ARTIKELBEZEICHNUNG | ARTIKELNUMMER | Nov 19 Min | Nov 19 Max
:----------------- | ------------: | ---------: | ---------:
SC-4400.CW         |     220450002 |          5 |         39
S-320.FK120        |     220502004 |          0 |         15
H-595.FK120        |     220800004 |          2 |         35
AC-548.FK209       |     220948032 |          0 |          0
AS-6800.CW         |     221355002 |         10 |         20
Is this at all possible in SQL?
Thanks for taking the time to read my post.
JeRi
You can use a partitioned outer join (the PARTITION BY clause of the outer join repeats the calendar rows for every article, so articles with no stock movement on a given day still get a row):
WITH calendar ( day ) AS (
SELECT DATE '2019-11-01'
FROM DUAL
UNION ALL
SELECT day + INTERVAL '1' DAY
FROM calendar
WHERE day < LAST_DAY( DATE '2019-11-01' )
),
daily_totals ( artbez1, Artikelnr, Day, total_menge_ist ) AS (
SELECT MAX( ab.artbez1 ),
ab.artikelnr,
c.day,
COALESCE( SUM( ab.menge_ist ), 0 )
FROM calendar c
LEFT OUTER JOIN
( SELECT a.artikelnr,
a.artbez1,
b.neu_datum,
b.aender_datum,
b.menge_ist
FROM artikel a
LEFT JOIN bestand b
ON ( a.artikelnr = b.artikelnr )
-- WHERE b.artikelnr IN ('273632002', .... (huge long list of numbers) ....)
) ab
PARTITION BY ( ab.artikelnr, ab.artbez1 )
ON ( c.day BETWEEN ab.neu_datum AND ab.aender_datum )
GROUP BY ab.artikelnr, c.day
)
SELECT MAX( artbez1 ) AS Artikelbezeichnung,
artikelnr AS Artikelnummer,
TRUNC( day, 'MM' ) AS month,
MIN( total_menge_ist ) AS min_total_menge_ist,
MAX( total_menge_ist ) AS max_total_menge_ist
FROM daily_totals
GROUP BY artikelnr, TRUNC( day, 'MM' );
Which, for the sample data:
CREATE TABLE artikel ( artikelnr, artbez1 ) AS
SELECT 220450002, 'SC-4400.CW' FROM DUAL UNION ALL
SELECT 220502004, 'S-320.FK120' FROM DUAL UNION ALL
SELECT 220800004, 'H-595.FK120' FROM DUAL UNION ALL
SELECT 220948032, 'AC-548.FK209' FROM DUAL UNION ALL
SELECT 221355002, 'AS-6800.CW' FROM DUAL;
CREATE TABLE bestand ( artikelnr, neu_datum, aender_datum, menge_ist ) AS
SELECT 220450002, DATE '2019-10-30', DATE '2019-11-01', 20 FROM DUAL UNION ALL
SELECT 220450002, DATE '2019-11-01', DATE '2019-11-05', 19 FROM DUAL UNION ALL
SELECT 220502004, DATE '2019-11-05', DATE '2019-11-03', 5 FROM DUAL UNION ALL
SELECT 220800004, DATE '2019-11-01', DATE '2019-11-15', 35 FROM DUAL UNION ALL
SELECT 221355002, DATE '2019-10-20', DATE '2019-11-05', 5 FROM DUAL UNION ALL
SELECT 221355002, DATE '2019-10-25', DATE '2019-11-10', 5 FROM DUAL UNION ALL
SELECT 221355002, DATE '2019-10-28', DATE '2019-11-13', 5 FROM DUAL UNION ALL
SELECT 221355002, DATE '2019-10-30', DATE '2019-11-15', 5 FROM DUAL UNION ALL
SELECT 221355002, DATE '2019-11-05', DATE '2019-11-20', 5 FROM DUAL;
Outputs:
ARTIKELBEZEICHNUNG | ARTIKELNUMMER | MONTH | MIN_TOTAL_MENGE_IST | MAX_TOTAL_MENGE_IST
:----------------- | ------------: | :------------------ | ------------------: | ------------------:
SC-4400.CW | 220450002 | 2019-11-01 00:00:00 | 0 | 39
S-320.FK120 | 220502004 | 2019-11-01 00:00:00 | 0 | 0
AC-548.FK209 | 220948032 | 2019-11-01 00:00:00 | 0 | 0
H-595.FK120 | 220800004 | 2019-11-01 00:00:00 | 0 | 35
AS-6800.CW | 221355002 | 2019-11-01 00:00:00 | 0 | 25
db<>fiddle here
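Since the question says the timespan should be set at the time of running the SQL, the hard-coded November 2019 anchor could be replaced with a bind variable. A minimal sketch, where :run_month is an assumed bind holding any date in the desired month:
WITH calendar ( day ) AS (
  -- first day of the requested month
  SELECT TRUNC( :run_month, 'MM' )
  FROM   DUAL
  UNION ALL
  SELECT day + INTERVAL '1' DAY
  FROM   calendar
  WHERE  day < LAST_DAY( TRUNC( :run_month, 'MM' ) )
)
SELECT day FROM calendar;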
ID EFF_DT END_DT
FLA1 2018-01-01 00:00:00 2019-12-31 00:00:00
FLA1 2020-01-01 00:00:00 9999-12-31 00:00:00
The above structure needs to be split, and the split should be based on the date.
The output should have an additional column for the year:
ID EFF_DT END_DT YEAR
FLA1 2018-01-01 00:00:00 2019-12-31 00:00:00 2019
FLA1 2020-01-01 00:00:00 2020-12-31 00:00:00 2020
FLA1 2021-01-01 00:00:00 9999-12-31 00:00:00 2021
I am using a union for this purpose and it is generating duplicates. Any other approach / refined solution would work. Thanks in advance.
You can use a recursive sub-query factoring clause:
WITH split ( ID, EFF_DT, END_DT, MAX_DT ) AS (
SELECT id,
eff_dt,
LEAST(
ADD_MONTHS( TRUNC( SYSDATE, 'YY' ), 12 ) - INTERVAL '1' DAY,
end_dt
),
end_dt
FROM table_name
UNION ALL
SELECT id,
end_dt + INTERVAL '1' DAY,
max_dt,
max_dt
FROM split
WHERE end_dt < max_dt
)
SELECT id,
eff_dt,
end_dt
FROM split;
Which, for your sample data:
CREATE TABLE table_name ( ID, EFF_DT, END_DT ) AS
SELECT 'FLA1', DATE '2018-01-01', DATE '2019-12-31' FROM DUAL UNION ALL
SELECT 'FLA1', DATE '2020-01-01', DATE '9999-12-31' FROM DUAL;
Outputs:
ID | EFF_DT | END_DT
:--- | :------------------ | :------------------
FLA1 | 2018-01-01 00:00:00 | 2019-12-31 00:00:00
FLA1 | 2020-01-01 00:00:00 | 2020-12-31 00:00:00
FLA1 | 2021-01-01 00:00:00 | 9999-12-31 00:00:00
db<>fiddle here
If you want to generate all years of data, then:
with cte (id, eff_dt, end_dt, orig_end_dt) as (
  select id, eff_dt,
         -- cap the first fragment at the end of its starting year
         least(end_dt, add_months(trunc(eff_dt, 'YYYY'), 12) - 1),
         end_dt
  from t
  union all
  select id, end_dt + interval '1' day,
         -- each further fragment runs to the end of the next year (or the original end date)
         least(orig_end_dt, add_months(trunc(end_dt, 'YYYY'), 24) - 1),
         orig_end_dt
  from cte
  where end_dt < orig_end_dt
)
select id, eff_dt, end_dt, to_char(eff_dt, 'YYYY') as year
from cte;
Note: This produces a separate row for every year in the period.
If you want a limit on the year, then it would be something like this:
with cte (id, eff_dt, end_dt, orig_end_dt) as (
  select id, eff_dt,
         least(end_dt, add_months(trunc(eff_dt, 'YYYY'), 12) - 1),
         end_dt
  from t
  union all
  select id, end_dt + interval '1' day,
         least(orig_end_dt, add_months(trunc(end_dt, 'YYYY'), 24) - 1),
         orig_end_dt
  from cte
  where end_dt < orig_end_dt
    and trunc(end_dt, 'YYYY') < date '2021-01-01'
)
select id, eff_dt,
       (case when end_dt = date '2021-12-31' then orig_end_dt else end_dt end) as end_dt,
       to_char(eff_dt, 'YYYY') as year
from cte;
I'm trying to count the records in my table and group them by hour. I'm getting results with my query, but I want it to return every hour even if there are no records.
My current query is:
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'HH') as TRANSACTIONDATE
FROM TABLE
WHERE date_modified between to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') and to_date('24-Jan-19 06:59:59','dd-MON-yy hh24:mi:ss')
group by trunc(date_modified, 'HH');
This returns a result like this:
TRANSACTIONCOUNT | TRANSACTIONDATE
43 | 23-Jan-19 07:00:00
47 | 23-Jan-19 08:00:00
156 | 23-Jan-19 14:00:00
558 | 23-Jan-19 15:00:00
What I want is for it to return every hour between my two dates, so:
TRANSACTIONCOUNT | TRANSACTIONDATE
43 | 23-Jan-19 07:00:00
47 | 23-Jan-19 08:00:00
0 | 23-Jan-19 09:00:00
0 | 23-Jan-19 10:00:00
0 | 23-Jan-19 11:00:00
0 | 23-Jan-19 12:00:00
0 | 23-Jan-19 13:00:00
156 | 23-Jan-19 14:00:00
558 | 23-Jan-19 15:00:00
--......
0 | 24-Jan-19 00:00:00
0 | 24-Jan-19 01:00:00
0 | 24-Jan-19 02:00:00
--and so on
To fill the holes in the transaction hours, you first create a complete table of hours.
You may use Recursive Subquery Factoring to do it:
WITH hour_table(TRANSACTIONDATE) AS (
SELECT to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') /* init hour here */
FROM DUAL
UNION ALL
SELECT TRANSACTIONDATE + 1/24
FROM hour_table
WHERE TRANSACTIONDATE + 1/24 < to_date('24-JAN-19 06:59:59','dd-MON-yy hh24:mi:ss') /* limit here */
)
select * from hour_table;
TRANSACTIONDATE
-------------------
23.01.2019 07:00:00
23.01.2019 08:00:00
...
24.01.2019 05:00:00
24.01.2019 06:00:00
Note that you use the starting and ending dates in this query; the starting date must be exactly on the hour.
The next step is as simple as outer joining this hour table to your aggregation and setting the default value for the missing hours with NVL.
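If the starting value you pass in might carry minutes or seconds, you could truncate it to the hour first; a minimal sketch (the literal is just a stand-in for your actual parameter):
-- TRUNC(date, 'HH') drops minutes and seconds, giving 23.01.2019 07:00:00 here
SELECT TRUNC( to_date('23-JAN-19 07:15:30', 'dd-MON-yy hh24:mi:ss'), 'HH' ) AS start_hour
FROM dual;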
with hour_table(TRANSACTIONDATE) AS (
SELECT to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') /* init hour here */
FROM DUAL
UNION ALL
SELECT TRANSACTIONDATE + 1/24
FROM hour_table
WHERE TRANSACTIONDATE + 1/24 < to_date('24-JAN-19 06:59:59','dd-MON-yy hh24:mi:ss') /* limit */
),
agg as (
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'HH') as TRANSACTIONDATE
FROM "TABLE"
WHERE date_modified between to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') and to_date('24-Jan-19 06:59:59','dd-MON-yy hh24:mi:ss')
group by trunc(date_modified, 'HH')
)
select t.TRANSACTIONDATE, nvl(transactioncount,0) transactioncount
from hour_table t
left outer join agg a
on t.TRANSACTIONDATE = a.TRANSACTIONDATE
order by 1;
You might consider using the following CONNECT BY LEVEL logic:
SELECT sum(transactioncount) as transactioncount, transactiondate
FROM
(
with "TABLE"(date_modified) as
(
SELECT timestamp'2019-01-23 08:00:00' FROM dual union all
SELECT timestamp'2019-01-23 08:30:00' FROM dual union all
SELECT timestamp'2019-01-23 09:00:00' FROM dual union all
SELECT timestamp'2019-01-24 05:01:00' FROM dual
)
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'hh24') as transactiondate
FROM "TABLE" t
GROUP BY trunc(date_modified, 'HH24')
UNION ALL
SELECT 0, timestamp'2019-01-23 07:00:00' + ( level - 1 )/24
FROM dual
CONNECT BY level <= 24 * extract( day from
timestamp'2019-01-24 06:59:59'-
timestamp'2019-01-23 07:00:00') +
extract( hour from
timestamp'2019-01-24 06:59:59'-
timestamp'2019-01-23 07:00:00') + 1
)
GROUP BY transactiondate
ORDER BY transactiondate
Rextester Demo
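As a side note, the number of hourly buckets in the window can also be computed with plain date arithmetic instead of the two EXTRACT calls; a sketch:
-- (end - start) in days, times 24, rounded up to whole hours (24 for this window)
SELECT CEIL( ( CAST(timestamp '2019-01-24 06:59:59' AS DATE)
             - CAST(timestamp '2019-01-23 07:00:00' AS DATE) ) * 24 ) AS hours_in_window
FROM dual;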