I'm sure there is an easy way to do this in Oracle SQL, but I'm not able to find it.
I have a table (temp) with 3 columns (id_1, id_2, date), where date is of DATE type. Each row is unique.
The output I want is to repeat each row 15 times with a new date column: the first copy keeps the original date, the second gets the original + 1 day, the third the original + 2 days, and so on, for each original row...
Define a helper CTE with 15 rows and simply cross join it with your original table:
with days as (
select rownum -1 as day_offset
from dual connect by level <= 15)
select
a.id1, a.id2, a.date_d + b.day_offset new_date_d
from tab a
cross join days b
order by 1,2,3;
Example - for the sample data ...
select * from tab;
ID1 ID2 DATE_D
---------- ---------- -------------------
1 1 01.01.2020 00:00:00
2 2 01.01.2019 00:00:00
... you will get the following output
ID1 ID2 NEW_DATE_D
---------- ---------- -------------------
1 1 01.01.2020 00:00:00
1 1 02.01.2020 00:00:00
1 1 03.01.2020 00:00:00
1 1 04.01.2020 00:00:00
1 1 05.01.2020 00:00:00
....
2 2 13.01.2019 00:00:00
2 2 14.01.2019 00:00:00
2 2 15.01.2019 00:00:00
30 rows selected.
Alternatively, you may use recursive subquery factoring.
The day offset is calculated recursively as days.day_offset + 1, limited to 15, and used to build the new date value:
with days( id1, id2, date_d, day_offset) as (
select id1, id2, date_d, 0 day_offset from tab
union all
select days.id1, days.id2, tab.date_d + days.day_offset + 1 as date_d,
days.day_offset + 1 as day_offset
from tab
join days
on tab.id1 = days.id1 and tab.id2 = days.id2
and days.day_offset +1 < 15)
select * from days
order by 1,2,3
Assume this is my table:
ID DATE
--------------
1 2018-11-12
2 2018-11-13
3 2018-11-14
4 2018-11-15
5 2018-11-16
6 2019-03-05
7 2019-05-07
8 2019-05-08
9 2019-05-08
I need the partitions to be determined by the first date in the partition: any date that is within 2 days of the first date belongs in the same partition.
The table would end up looking like this if each partition was ranked
PARTITION ID DATE
------------------------
1 1 2018-11-12
1 2 2018-11-13
1 3 2018-11-14
2 4 2018-11-15
2 5 2018-11-16
3 6 2019-03-05
4 7 2019-05-07
4 8 2019-05-08
4 9 2019-05-08
I've tried using datediff with lag to compare each date to the previous one, but that allows a partition to grow without bound as long as each date is within 2 days of the previous one; for example, all of these dates would be included in the same partition:
ID DATE
--------------
1 2018-11-12
2 2018-11-14
3 2018-11-16
4 2018-11-18
5 2018-11-20
6 2018-11-22
Previous flawed attempt:
Mark when a date is more than 2 days past the previous date:
(case when datediff(day, lag(event_time, 1) over (partition by user_id, stage order by event_time), event_time) > 2 then 1 else 0 end)
You need to use a recursive CTE for this, so the operation is expensive.
with t_seq as (
-- add an incrementing column with no gaps (the base table is assumed to be named t)
select t.*, row_number() over (order by date) as seqnum
from t
),
cte as (
select id, date, date as mindate, seqnum
from t_seq
where seqnum = 1
union all
select t_seq.id, t_seq.date,
(case when t_seq.date <= dateadd(day, 2, cte.mindate)
then cte.mindate else t_seq.date
end) as mindate,
t_seq.seqnum
from cte join
t_seq
on t_seq.seqnum = cte.seqnum + 1
)
select cte.*, dense_rank() over (order by mindate) as partition_num
from cte;
I am learning SQL and I was wondering how to select active users by month, depending on their starting and ending date (both timestamp(6)). My table looks like this:
Cust_Num | Start_Date | End_Date
1 | 2018-01-01 | 2019-01-01
2 | 2018-01-01 | NULL
3 | 2019-01-01 | 2019-06-01
4 | 2017-01-01 | 2019-03-01
So, counting the active users by month, I should have an output like:
As of. | Count
2018-06-01 | 3
...
2019-02-01 | 3
2019-07-01 | 1
So far, I do a manual operation by entering each month:
Select
201906,
count(distinct a.cust_num)
From
active_users a
Where
to_date('20190630','yyyymmdd') between a.start_date and nvl(a.end_date, '31-dec-9999')
union all
Select
201905,
count(distinct a.cust_num)
From
active_users a
Where
to_date('20190531','yyyymmdd') between a.start_date and nvl(a.end_date, '31-dec-9999')
union all
...
Not very optimized or sustainable if I want to enter 10 years, or 120 months, lol.
Any help is welcome. Thanks a lot!
This query shows the active-user-count effective as-of the end of the month.
How it works:
Convert each input row (with StartDate and EndDate value) into two rows that represent a point-in-time when the active-user-count incremented (on StartDate) and decremented (on EndDate). We need to convert NULL to a far-off date value because NULL values are sorted before instead of after non-NULL values:
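In the full query below, this is the sq1 level; pulled out here for reference, it is just a UNION ALL of the start dates (+1) and the end dates (-1), with ISNULL supplying the far-off date:
SELECT
    StartDate AS [OnThisDate],
    1 AS [Change]
FROM
    tbl
UNION ALL
SELECT
    ISNULL( EndDate, DATEFROMPARTS( 9999, 12, 31 ) ) AS [OnThisDate],
    -1 AS [Change]
FROM
    tbl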
This makes your data look like this:
OnThisDate Change
2018-01-01 1
2019-01-01 -1
2018-01-01 1
9999-12-31 -1
2019-01-01 1
2019-06-01 -1
2017-01-01 1
2019-03-01 -1
Then we simply SUM OVER the Change values (after sorting) to get the active-user-count as of that specific date:
So first, sort by OnThisDate:
OnThisDate Change
2017-01-01 1
2018-01-01 1
2018-01-01 1
2019-01-01 1
2019-01-01 -1
2019-03-01 -1
2019-06-01 -1
9999-12-31 -1
Then SUM OVER:
OnThisDate ActiveCount
2017-01-01 1
2018-01-01 2
2018-01-01 3
2019-01-01 4
2019-01-01 3
2019-03-01 2
2019-06-01 1
9999-12-31 0
Then we PARTITION (not group!) the rows by month and sort them by their date so we can identify the last ActiveCount row for that month (this actually happens in the WHERE of the outermost query, using ROW_NUMBER() and COUNT() for each month PARTITION):
OnThisDate ActiveCount IsLastInMonth
2017-01-01 1 1
2018-01-01 2 0
2018-01-01 3 1
2019-01-01 4 0
2019-01-01 3 1
2019-03-01 2 1
2019-06-01 1 1
9999-12-31 0 1
Then filter on that where IsLastInMonth = 1 (actually, where ROW_NUMBER() = COUNT(*) inside each PARTITION) to give us the final output data:
At-end-of-month Active-count
2017-01 1
2018-01 3
2019-01 3
2019-03 2
2019-06 1
9999-12 0
This does result in "gaps" in the result-set because the At-end-of-month column only shows rows where the Active-count value actually changed rather than including all possible calendar months - but that's ideal (as far as I'm concerned) because it excludes redundant data. Filling in the gaps can be done inside your application code by simply repeating output rows for each additional month until it reaches the next At-end-of-month value.
Here's the query using T-SQL on SQL Server (I don't have access to Oracle right now). And here's the SQLFiddle I used to come to a solution: http://sqlfiddle.com/#!18/ad68b7/24
SELECT
OtdYear,
OtdMonth,
ActiveCount
FROM
(
-- This query adds columns to indicate which row is the last-row-in-month ( where RowInMonth == RowsInMonth )
SELECT
OnThisDate,
OtdYear,
OtdMonth,
ROW_NUMBER() OVER ( PARTITION BY OtdYear, OtdMonth ORDER BY OnThisDate ) AS RowInMonth,
COUNT(*) OVER ( PARTITION BY OtdYear, OtdMonth ) AS RowsInMonth,
ActiveCount
FROM
(
SELECT
OnThisDate,
YEAR( OnThisDate ) AS OtdYear,
MONTH( OnThisDate ) AS OtdMonth,
SUM( [Change] ) OVER ( ORDER BY OnThisDate ASC ) AS ActiveCount
FROM
(
SELECT
StartDate AS [OnThisDate],
1 AS [Change]
FROM
tbl
UNION ALL
SELECT
ISNULL( EndDate, DATEFROMPARTS( 9999, 12, 31 ) ) AS [OnThisDate],
-1 AS [Change]
FROM
tbl
) AS sq1
) AS sq2
) AS sq3
WHERE
RowInMonth = RowsInMonth
ORDER BY
OtdYear,
OtdMonth
This query can be flattened into fewer nested queries by using aggregate and window functions directly instead of using aliases (like OtdYear, ActiveCount, etc) but that would make the query much harder to understand.
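For illustration, here is a hedged, untested sketch of that flattening: the aliasing levels are folded together by repeating the YEAR()/MONTH() expressions inside the window clauses, and one nesting level still remains because the RowInMonth = RowsInMonth filter cannot sit in the same SELECT as the window functions it references.
SELECT
    YEAR( OnThisDate ) AS OtdYear,
    MONTH( OnThisDate ) AS OtdMonth,
    ActiveCount
FROM
(
    SELECT
        OnThisDate,
        ROW_NUMBER() OVER ( PARTITION BY YEAR( OnThisDate ), MONTH( OnThisDate ) ORDER BY OnThisDate ) AS RowInMonth,
        COUNT(*) OVER ( PARTITION BY YEAR( OnThisDate ), MONTH( OnThisDate ) ) AS RowsInMonth,
        SUM( [Change] ) OVER ( ORDER BY OnThisDate ASC ) AS ActiveCount
    FROM
    (
        SELECT StartDate AS [OnThisDate], 1 AS [Change] FROM tbl
        UNION ALL
        SELECT ISNULL( EndDate, DATEFROMPARTS( 9999, 12, 31 ) ), -1 FROM tbl
    ) AS sq1
) AS sq2
WHERE
    RowInMonth = RowsInMonth
ORDER BY
    OtdYear,
    OtdMonth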
I have created a query which gives the result for all the months, starting from the minimum start date in the table to the maximum end date.
You can change this by adding one condition to the WHERE clause (an example is sketched after the output below).
-- table creation
CREATE TABLE ACTIVE_USERS (CUST_NUM NUMBER, START_DATE DATE, END_DATE DATE)
-- data creation
INSERT INTO ACTIVE_USERS
SELECT * FROM
(
SELECT 1, DATE '2018-01-01', DATE '2019-01-01' FROM DUAL UNION ALL
SELECT 2, DATE '2018-01-01', NULL FROM DUAL UNION ALL
SELECT 3, DATE '2019-01-01', DATE '2019-06-01' FROM DUAL UNION ALL
SELECT 4, DATE '2017-01-01', DATE '2019-03-01' FROM DUAL
)
-- data in the actual table
SELECT * FROM ACTIVE_USERS ORDER BY CUST_NUM;
CUST_NUM START_DATE END_DATE
---------- ---------- ----------
1 2018-01-01 2019-01-01
2 2018-01-01
3 2019-01-01 2019-06-01
4 2017-01-01 2019-03-01
Query to fetch desired result
WITH CTE ( START_DATE, END_DATE ) AS
(
SELECT
ADD_MONTHS( START_DATE, LEVEL - 1 ),
ADD_MONTHS( START_DATE, LEVEL ) - 1
FROM
(
SELECT
MIN( START_DATE ) AS START_DATE,
MAX( END_DATE ) AS END_DATE
FROM
ACTIVE_USERS
)
CONNECT BY LEVEL <= CEIL( MONTHS_BETWEEN( END_DATE, START_DATE ) ) + 1
)
--
--
SELECT
C.START_DATE,
COUNT(1) AS CNT
FROM
CTE C
JOIN ACTIVE_USERS D ON
(
C.END_DATE BETWEEN
D.START_DATE
AND
CASE
WHEN D.END_DATE IS NOT NULL THEN D.END_DATE
ELSE C.END_DATE
END
)
GROUP BY
C.START_DATE
ORDER BY
C.START_DATE;
-- output --
START_DATE CNT
---------- ----------
2017-01-01 1
2017-02-01 1
2017-03-01 1
2017-04-01 1
2017-05-01 1
2017-06-01 1
2017-07-01 1
2017-08-01 1
2017-09-01 1
2017-10-01 1
2017-11-01 1
START_DATE CNT
---------- ----------
2017-12-01 1
2018-01-01 3
2018-02-01 3
2018-03-01 3
2018-04-01 3
2018-05-01 3
2018-06-01 3
2018-07-01 3
2018-08-01 3
2018-09-01 3
2018-10-01 3
START_DATE CNT
---------- ----------
2018-11-01 3
2018-12-01 3
2019-01-01 3
2019-02-01 3
2019-03-01 2
2019-04-01 2
2019-05-01 2
2019-06-01 1
30 rows selected.
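As an example of the extra WHERE condition mentioned above, the final SELECT (reusing the CTE from the query above) can be restricted to a fixed window; the cutoff dates here are arbitrary:
SELECT
    C.START_DATE,
    COUNT(1) AS CNT
FROM
    CTE C
JOIN ACTIVE_USERS D ON
    (
        C.END_DATE BETWEEN
        D.START_DATE
        AND
        CASE
            WHEN D.END_DATE IS NOT NULL THEN D.END_DATE
            ELSE C.END_DATE
        END
    )
WHERE
    C.START_DATE BETWEEN DATE '2019-01-01' AND DATE '2019-06-01'  -- the added condition
GROUP BY
    C.START_DATE
ORDER BY
    C.START_DATE;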
Cheers!!
I have a table UPCALL_HISTORY that has 3 columns: SUBSCRIBER_ID, START_DATE and END_DATE. Let the number of unique subscribers be N.
I want to create a new table with 3 columns:
SUBSCRIBER_ID: All of the unique subscriber ids repeated 36 times in a row.
MONTHLY_CALENDAR_ID: For each SUBSCRIBER_ID, this column will have dates listed from July 2015 until July 2018 (36 months).
ACTIVE: This column will be used as a flag for each subscriber and whether they have a subscription during that month. This subscription data is in a table called UPCALL_HISTORY.
I am fairly new to SQL and don't have a lot of experience. I am good at Python, but it seems that SQL doesn't work like Python.
Any query ideas that could help me build this table?
Let my UPCALL_HISTORY table be:
+---------------+------------+------------+
| SUBSCRIBER_ID | START_DATE | END_DATE |
+---------------+------------+------------+
| 119 | 01/07/2015 | 01/08/2015 |
| 120 | 01/08/2015 | 01/09/2015 |
| 121 | 01/09/2015 | 01/10/2015 |
+---------------+------------+------------+
I want a table that looks like:
+---------------+------------+--------+
| SUBSCRIBER_ID | MON_CA | ACTIVE |
+---------------+------------+--------+
| 119 | 01/07/2015 | 1 |
| * | 01/08/2015 | 0 |
| * | 01/09/2015 | 0 |
| (36 times) | 01/10/2015 | 0 |
| * | * | 0 |
| 119 | 01/07/2018 | 0 |
+---------------+------------+--------+
that continues for 120 and 121
EDIT: Added Example
Here's how I understood the question.
Sample table and several rows:
SQL> create table upcall_history
2 (subscriber_id number,
3 start_date date,
4 end_date date);
Table created.
SQL> insert into upcall_history
2 select 1, date '2015-12-25', date '2016-01-13' from dual union
3 select 1, date '2017-07-10', date '2017-07-11' from dual union
4 select 2, date '2018-01-01', date '2018-04-24' from dual;
3 rows created.
Create a new table. For distinct SUBSCRIBER_ID's, it creates 36 "monthly" rows, fixed (as you stated).
SQL> create table new_table as
2 select
3 x.subscriber_id,
4 add_months(date '2015-07-01', column_value - 1) monthly_calendar_id,
5 0 active
6 from (select distinct subscriber_id from upcall_history) x,
7 table(cast(multiset(select level from dual
8 connect by level <= 36
9 ) as sys.odcinumberlist));
Table created.
Update the ACTIVE column value to 1 for rows whose MONTHLY_CALENDAR_ID falls between the (month-truncated) START_DATE and END_DATE of the UPCALL_HISTORY table.
SQL> merge into new_table n
2 using (select subscriber_id, start_date, end_date from upcall_history) x
3 on ( n.subscriber_id = x.subscriber_id
4 and n.monthly_calendar_id between trunc(x.start_date, 'mm')
5 and trunc(x.end_date, 'mm')
6 )
7 when matched then
8 update set n.active = 1;
7 rows merged.
SQL>
Result (only ACTIVE = 1):
SQL> select * from new_table
2 where active = 1
3 order by subscriber_id, monthly_calendar_id;
SUBSCRIBER_ID MONTHLY_CA ACTIVE
------------- ---------- ----------
1 2015-12-01 1
1 2016-01-01 1
1 2017-07-01 1
2 2018-01-01 1
2 2018-02-01 1
2 2018-03-01 1
2 2018-04-01 1
7 rows selected.
SQL>
If you're on 12c you can use an inline view of all the months with cross apply to get the combinations of those with all IDs:
select uh.subscriber_id, m.month,
case when trunc(uh.start_date, 'MM') <= m.month
and (uh.end_date is null or uh.end_date >= add_months(m.month, 1))
then 1 else 0 end as active
from upcall_history uh
cross apply (
select add_months(trunc(sysdate, 'MM'), - level) as month
from dual
connect by level <= 36
) m
order by uh.subscriber_id, m.month;
I've made it a rolling 36-month window up to the current month, but you may actually want fixed dates as you had in the question.
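If you do want the fixed window, only the inline view needs to change; a hedged sketch pinning the generator to 36 months starting July 2015:
select uh.subscriber_id, m.month,
       case when trunc(uh.start_date, 'MM') <= m.month
            and (uh.end_date is null or uh.end_date >= add_months(m.month, 1))
            then 1 else 0 end as active
from upcall_history uh
cross apply (
    select add_months(date '2015-07-01', level - 1) as month
    from dual
    connect by level <= 36
) m
order by uh.subscriber_id, m.month;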
With sample data from a CTE:
with upcall_history (subscriber_id, start_date, end_date) as (
select 1, date '2015-09-04', date '2015-12-15' from dual
union all select 2, date '2017-12-04', '2018-05-15' from dual
)
that generates 72 rows:
SUBSCRIBER_ID MONTH ACTIVE
------------- ---------- ----------
1 2015-07-01 0
1 2015-08-01 0
1 2015-09-01 1
1 2015-10-01 1
1 2015-11-01 1
1 2015-12-01 0
1 2016-01-01 0
...
2 2017-11-01 0
2 2017-12-01 1
2 2018-01-01 1
2 2018-02-01 1
2 2018-03-01 1
2 2018-04-01 1
2 2018-05-01 0
2 2018-06-01 0
You can use that to create a new table, or populate an existing table; though if you do want a rolling window then a view might be more appropriate.
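For instance, the view option might look like this (a sketch; the view name is hypothetical, and it simply wraps the rolling query above):
create or replace view subscriber_activity_v as
select uh.subscriber_id, m.month,
       case when trunc(uh.start_date, 'MM') <= m.month
            and (uh.end_date is null or uh.end_date >= add_months(m.month, 1))
            then 1 else 0 end as active
from upcall_history uh
cross apply (
    select add_months(trunc(sysdate, 'MM'), - level) as month
    from dual
    connect by level <= 36
) m;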
If you aren't on 12c then cross apply isn't available - you'll get "ORA-00905: missing keyword".
You can get the same result with two CTEs (one to get all the months, the other to get all the IDs) cross-joined, and then outer joined to your actual data:
with m (month) as (
select add_months(trunc(sysdate, 'MM'), - level)
from dual
connect by level <= 36
),
i (subscriber_id) as (
select distinct subscriber_id
from upcall_history
)
select i.subscriber_id, m.month,
case when uh.subscriber_id is null then 0 else 1 end as active
from m
cross join i
left join upcall_history uh
on uh.subscriber_id = i.subscriber_id
and trunc(uh.start_date, 'MM') <= m.month
and (uh.end_date is null or uh.end_date >= add_months(m.month, 1))
order by i.subscriber_id, m.month;
You can do this in 11g using Partitioned Outer Joins, like so:
WITH upcall_history AS (SELECT 119 subscriber_id, to_date('01/07/2015', 'dd/mm/yyyy') start_date, to_date('01/08/2015', 'dd/mm/yyyy') end_date FROM dual UNION ALL
SELECT 120 subscriber_id, to_date('01/08/2015', 'dd/mm/yyyy') start_date, to_date('01/09/2015', 'dd/mm/yyyy') end_date FROM dual UNION ALL
SELECT 121 subscriber_id, to_date('01/09/2015', 'dd/mm/yyyy') start_date, to_date('01/10/2015', 'dd/mm/yyyy') end_date FROM dual),
mnths AS (SELECT add_months(TRUNC(SYSDATE, 'mm'), + 1 - LEVEL) mnth
FROM dual
CONNECT BY LEVEL <= 12 * 3 + 1)
SELECT uh.subscriber_id,
m.mnth,
CASE WHEN mnth BETWEEN start_date AND end_date - 1 THEN 1 ELSE 0 END active
FROM mnths m
LEFT OUTER JOIN upcall_history uh PARTITION BY (uh.subscriber_id) ON (1=1)
ORDER BY uh.subscriber_id,
m.mnth;
SUBSCRIBER_ID MNTH ACTIVE
------------- ----------- ----------
119 01/07/2015 1
119 01/08/2015 0
119 01/09/2015 0
119 01/10/2015 0
<snip>
119 01/06/2018 0
119 01/07/2018 0
--
120 01/07/2015 0
120 01/08/2015 1
120 01/09/2015 0
120 01/10/2015 0
<snip>
120 01/06/2018 0
120 01/07/2018 0
--
121 01/07/2015 0
121 01/08/2015 0
121 01/09/2015 1
121 01/10/2015 0
<snip>
121 01/06/2018 0
121 01/07/2018 0
N.B. I have assumed some things about your start/end dates and what constitutes active; hopefully it should be easy enough for you to tweak the case statement to fit the logic that works best for your situation.
I also believe this can be an example for CROSS JOIN. All I had to do was create a small table of all of the dates and then CROSS JOIN it with the table of subscribers; a rough sketch is below.
Example: https://www.essentialsql.com/cross-join-introduction/
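A rough sketch of that approach (hedged: the months CTE, the 36-month window starting July 2015, and the EXISTS test for activity are my assumptions, so adjust them to your definition of "active"):
with months (mon_ca) as (
    select add_months(date '2015-07-01', level - 1)
    from dual
    connect by level <= 36
),
subs as (
    select distinct subscriber_id from upcall_history
)
select s.subscriber_id,
       m.mon_ca,
       case when exists (select 1
                         from upcall_history u
                         where u.subscriber_id = s.subscriber_id
                           and m.mon_ca between trunc(u.start_date, 'mm')
                                            and trunc(u.end_date, 'mm'))
            then 1 else 0 end as active
from subs s
cross join months m
order by s.subscriber_id, m.mon_ca;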
Suppose I have my table like:
uid day_used_app
--- -------------
1 2012-04-28
1 2012-04-29
1 2012-04-30
2 2012-04-29
2 2012-04-30
2 2012-05-01
2 2012-05-21
2 2012-05-22
Suppose I want the number of unique users who returned to the app at least 2 different days in the last 7 days (from 2012-05-03).
So as an example to retrieve the number of users who have used the application on at least 2 different days in the past 7 days:
select count(distinct case when num_different_days_on_app >= 2
then uid else null end) as users_return_2_or_more_days
from (
select uid,
count(distinct day_used_app) as num_different_days_on_app
from table
where day_used_app between current_date() - 7 and current_date()
group by 1
)
This gives me:
users_return_2_or_more_days
---------------------------
2
The question I have is:
What if I want to do this for every day up to now, so that my table looks like this, where the second field equals the number of unique users who returned 2 or more different days within a week prior to the date in the first field?
date users_return_2_or_more_days
-------- ---------------------------
2012-04-28 2
2012-04-29 2
2012-04-30 3
2012-05-01 4
2012-05-02 4
2012-05-03 3
Would this help?
WITH
-- your original input, don't use in "real" query ...
input(uid,day_used_app) AS (
SELECT 1,DATE '2012-04-28'
UNION ALL SELECT 1,DATE '2012-04-29'
UNION ALL SELECT 1,DATE '2012-04-30'
UNION ALL SELECT 2,DATE '2012-04-29'
UNION ALL SELECT 2,DATE '2012-04-30'
UNION ALL SELECT 2,DATE '2012-05-01'
UNION ALL SELECT 2,DATE '2012-05-21'
UNION ALL SELECT 2,DATE '2012-05-22'
)
-- end of input, start "real" query here, replace ',' with 'WITH'
,
one_week_b4 AS (
SELECT
uid
, day_used_app
, day_used_app -7 AS day_used_1week_b4
FROM input
)
SELECT
one_week_b4.uid
, one_week_b4.day_used_app
, count(*) AS users_return_2_or_more_days
FROM one_week_b4
JOIN input
ON input.day_used_app BETWEEN one_week_b4.day_used_1week_b4 AND one_week_b4.day_used_app
GROUP BY
one_week_b4.uid
, one_week_b4.day_used_app
HAVING count(*) >= 2
ORDER BY 1;
Output is:
uid|day_used_app|users_return_2_or_more_days
1|2012-04-29 | 3
1|2012-04-30 | 5
2|2012-04-29 | 3
2|2012-04-30 | 5
2|2012-05-01 | 6
2|2012-05-22 | 2
Does that help your needs?
Marco the Sane ...
SELECT DISTINCT
t1.day_used_app,
(
SELECT SUM(CASE WHEN t.num_visits >= 2 THEN 1 ELSE 0 END)
FROM
(
SELECT uid,
COUNT(DISTINCT day_used_app) AS num_visits
FROM table
WHERE day_used_app BETWEEN t1.day_used_app - 7 AND t1.day_used_app
GROUP BY uid
) t
) AS users_return_2_or_more_days
FROM table t1
I have a table with many IDs and many dates associated with each ID, and even a few IDs with no date. For each ID and date combination, I want to select the ID, date, and the next largest date also associated with that same ID, or null as next date if none exists.
Sample Table:
ID Date
1 5/1/10
1 6/1/10
1 7/1/10
2 6/15/10
3 8/15/10
3 8/15/10
4 4/1/10
4 4/15/10
4
Desired Output:
ID Date Next_Date
1 5/1/10 6/1/10
1 6/1/10 7/1/10
1 7/1/10
2 6/15/10
3 8/15/10
3 8/15/10
4 4/1/10 4/15/10
4 4/15/10
SELECT
mytable.id,
mytable.date,
(
SELECT
MIN(mytablemin.date)
FROM mytable AS mytablemin
WHERE mytablemin.date > mytable.date
AND mytable.id = mytablemin.id
) AS NextDate
FROM mytable
This has been tested on SQL Server 2008 R2 (but it should work on other DBMSs) and produces the following output:
id date NextDate
----------- ----------------------- -----------------------
1 2010-05-01 00:00:00.000 2010-06-01 00:00:00.000
1 2010-06-01 00:00:00.000 2010-07-01 00:00:00.000
1 2010-07-01 00:00:00.000 NULL
2 2010-06-15 00:00:00.000 NULL
3 2010-08-15 00:00:00.000 NULL
3 2010-08-15 00:00:00.000 NULL
4 2010-04-01 00:00:00.000 2010-04-15 00:00:00.000
4 2010-04-15 00:00:00.000 NULL
4 NULL NULL
Update 1:
For those that are interested, I've compared the performance of the two variants in SQL Server 2008 R2 (one uses MIN aggregate and the other uses TOP 1 with an ORDER BY):
Without an index on the date column, the MIN version had a cost of 0.0187916 and the TOP/ORDER BY version had a cost of 0.115073 so the MIN version was "better".
With an index on the date column, they performed identically.
Note that this was testing with just these 9 records so the results could be (very) spurious...
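For reference, the TOP 1/ORDER BY variant that was compared is not shown above; it would look roughly like this (a sketch, not necessarily the exact query that was timed):
SELECT
    mytable.id,
    mytable.date,
    (
        SELECT TOP 1 mytablemin.date
        FROM mytable AS mytablemin
        WHERE mytablemin.date > mytable.date
        AND mytable.id = mytablemin.id
        ORDER BY mytablemin.date
    ) AS NextDate
FROM mytable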
Update 2:
The results hold for 10,000 uniformly distributed random records. The TOP/ORDER BY query takes so long to run at 100,000 records I had to cancel it and give up.
If your DB is Oracle, you can use the lead() and lag() functions.
SELECT id, date,
LEAD(date) OVER (PARTITION BY id ORDER BY date) AS next_date
FROM Your_table
ORDER BY id;
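Equivalently, lag() over a descending sort reads the same neighbouring row from the other direction (a hedged sketch, not from the original answer):
SELECT id, date,
LAG(date) OVER (PARTITION BY id ORDER BY date DESC) AS next_date
FROM Your_table
ORDER BY id;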
SELECT
id,
date,
( SELECT date
FROM table t1
WHERE t1.id = t2.id
AND t1.date > t2.date
ORDER BY t1.date LIMIT 1 ) AS next_date
FROM table t2
I think a self JOIN would be faster than a subselect.
WITH dates AS (
SELECT 1 AS ID, '2010-05-01' AS Date
UNION ALL SELECT 1, '2010-06-01'
UNION ALL SELECT 1, '2010-07-01'
UNION ALL SELECT 2, '2010-06-15'
UNION ALL SELECT 3, '2010-08-15'
UNION ALL SELECT 3, '2010-08-15'
UNION ALL SELECT 4, '2010-04-01'
UNION ALL SELECT 4, '2010-04-15'
UNION ALL SELECT 4, ''
)
SELECT
dates.ID,
dates.Date,
nextDates.Date AS Next_Date
FROM
dates
LEFT JOIN
dates nextDates
ON nextDates.ID = dates.ID
AND nextDates.Date > dates.Date
LEFT JOIN
dates noLower
ON noLower.ID = nextDates.ID
AND noLower.Date < nextDates.Date
AND noLower.Date > dates.Date
WHERE
dates.Date > 0
AND noLower.ID IS NULL
https://www.db-fiddle.com/f/4sWRLt2hxjik5HqiJ21ez8/1