Oracle Dynamic Range - SQL

So, I have this sample data:
Department | InitialDate | FinalDate
-------------------------------------------------------
1 | 01/01/2017 01:12:00 | 01/03/2017 00:00:08
1 | 01/03/2017 00:00:08 | 01/04/2017 05:00:01
1 | 01/04/2017 05:00:01 | 01/05/2017 02:00:00
2 | 01/05/2017 10:00:00 | 01/06/2017 11:00:08
2 | 01/06/2017 11:00:08 | 01/07/2017 04:04:00
3 | 01/07/2017 04:00:00 | 01/07/2017 15:00:22
1 | 01/07/2017 14:00:00 | 01/07/2017 18:00:08
1 | 01/07/2017 18:15:00 | 01/08/2017 22:00:00
3 | 01/12/2017 01:30:03 | 01/12/2017 18:00:00
1 | 01/13/2017 23:12:00 | 01/13/2017 23:59:08
and I want to group it like this:
Department | InitialDate | FinalDate
-------------------------------------------------------
1 | 01/01/2017 01:12:00 | 01/05/2017 02:00:00
2 | 01/05/2017 10:00:00 | 01/07/2017 04:04:00
3 | 01/07/2017 04:00:00 | 01/07/2017 15:00:22
1 | 01/07/2017 14:00:00 | 01/08/2017 22:00:00
3 | 01/12/2017 01:30:03 | 01/12/2017 18:00:00
1 | 01/13/2017 23:12:00 | 01/13/2017 23:59:08
I need to group by department and get the first and last date of each group, but departments can repeat, and for each occurrence I want the first and last date of that specific window. I have already tried analytic functions, but nothing seems to work.

You can do it using the LAG analytic function to compare each row with the previous row:
SELECT department,
       MIN( InitialDate ) AS InitialDate,
       MAX( FinalDate )   AS FinalDate
FROM (
  SELECT department,
         InitialDate,
         FinalDate,
         SUM( grp_inc ) OVER ( ORDER BY FinalDate ) AS grp
  FROM (
    SELECT department,
           InitialDate,
           FinalDate,
           CASE WHEN LAG( department ) OVER ( ORDER BY FinalDate ) = department
                THEN 0
                ELSE 1
           END AS grp_inc
    FROM table_name
  )
)
GROUP BY department, grp
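If it helps to see what the inner queries produce, this sketch (against the same table_name) surfaces the per-row change flag and the running group number that the outer query aggregates on:
-- A sketch, assuming the same table_name: list each row's change flag (grp_inc)
-- and the running group number (grp) before the final MIN/MAX aggregation.
SELECT department,
       InitialDate,
       FinalDate,
       grp_inc,
       SUM( grp_inc ) OVER ( ORDER BY FinalDate ) AS grp
FROM (
  SELECT department,
         InitialDate,
         FinalDate,
         CASE WHEN LAG( department ) OVER ( ORDER BY FinalDate ) = department
              THEN 0
              ELSE 1
         END AS grp_inc
  FROM table_name
)
ORDER BY FinalDate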

This is a type of "gaps-and-islands" problem. One method of solving it is by determining where groups of overlapping times start. Then use a cumulative sum to define each group:
select departmentid, min(initialdate), max(finaldate)
from (select t.*,
             sum(grp_starts) over (partition by departmentid order by initialdate) as grp
      from (select t.*,
                   (case when exists (select 1
                                      from t t2
                                      where t2.departmentid = t.departmentid and
                                            t.initialdate > t2.initialdate and
                                            t.initialdate <= t2.finaldate
                                     )
                         then 0 else 1
                    end) as grp_starts
            from t
           ) t
     ) t
group by departmentid, grp;

Since you are only looking for where the department changes, and not for where the department changes or the initialdate differs from the previous row's finaldate, you can use the Tabibitosan method:
WITH sample_data AS (SELECT 1 department, to_date('01/01/2017 01:12:00', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/03/2017 00:00:08', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 1 department, to_date('01/03/2017 00:00:08', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/04/2017 05:00:01', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 1 department, to_date('01/04/2017 05:00:01', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/05/2017 02:00:00', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 2 department, to_date('01/05/2017 10:00:00', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/06/2017 11:00:08', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 2 department, to_date('01/06/2017 11:00:08', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/07/2017 04:04:00', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 3 department, to_date('01/07/2017 04:00:00', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/07/2017 15:00:22', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 1 department, to_date('01/07/2017 14:00:00', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/07/2017 18:00:08', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 1 department, to_date('01/07/2017 18:15:00', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/08/2017 22:00:00', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 3 department, to_date('01/12/2017 01:30:03', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/12/2017 18:00:00', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual union all
SELECT 1 department, to_date('01/13/2017 23:12:00', 'mm/dd/yyyy hh24:mi:ss') initialdate, to_date('01/13/2017 23:59:08', 'mm/dd/yyyy hh24:mi:ss') finaldate from dual)
SELECT department,
       MIN(initialdate) initialdate,
       MAX(finaldate) finaldate
FROM   (SELECT department,
               initialdate,
               finaldate,
               row_number() OVER (ORDER BY initialdate)
                 - row_number() OVER (PARTITION BY department ORDER BY initialdate) grp
        FROM   sample_data sd)
GROUP BY department, grp
ORDER BY initialdate;
DEPARTMENT INITIALDATE FINALDATE
---------- ------------------- -------------------
1 01/01/2017 01:12:00 01/05/2017 02:00:00
2 01/05/2017 10:00:00 01/07/2017 04:04:00
3 01/07/2017 04:00:00 01/07/2017 15:00:22
1 01/07/2017 14:00:00 01/08/2017 22:00:00
3 01/12/2017 01:30:03 01/12/2017 18:00:00
1 01/13/2017 23:12:00 01/13/2017 23:59:08
This works by walking through and numbering all rows ordered by initial date and comparing them to walking through and numbering the rows for each department. When the department changes, the difference between the numbers changes. Where a department has consecutive rows in the initial dataset, the difference will remain the same for those rows. E.g. in your data, department 1 has 6 rows, the first 3 rows are the same as the first 3 rows of the initial data set, so the difference for those three rows is 0. The fourth and fifth department 1 rows are the 7th and 8th rows in the dataset, so the difference is 3 for those rows, etc.
This gives us a number that we can use, in conjunction with the department number, to group the data by. It's then a simple matter of finding the min/max dates within that group.
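To see the mechanics, a sketch like the following (reusing the sample_data CTE above) lists the two row numbers and the difference that identifies each island:
-- A sketch: prepend the sample_data CTE from above, then run this to see the
-- global row number, the per-department row number, and their difference.
SELECT department,
       initialdate,
       row_number() OVER (ORDER BY initialdate)                             AS rn_all,
       row_number() OVER (PARTITION BY department ORDER BY initialdate)     AS rn_dept,
       row_number() OVER (ORDER BY initialdate)
         - row_number() OVER (PARTITION BY department ORDER BY initialdate) AS grp
FROM   sample_data
ORDER BY initialdate;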

Related

How to fill date range gaps Oracle SQL

Given this dataset:
WITH ranges AS (
select to_date('01.01.2021 00:00:00','DD.MM.YYYY hh24:mi:ss') date_from,
to_date('31.03.2021 00:00:00','DD.MM.YYYY hh24:mi:ss') date_to
from dual
union
select to_date('27.03.2021 00:00:00','DD.MM.YYYY hh24:mi:ss') date_from,
to_date('27.04.2021 00:00:00','DD.MM.YYYY hh24:mi:ss') date_to
from dual
union
select to_date('01.05.2021 00:00:00','DD.MM.YYYY hh24:mi:ss') date_from,
to_date('31.12.2021 00:00:00','DD.MM.YYYY hh24:mi:ss') date_to
from dual
)
SELECT * FROM ranges;
How can I find the gap 28.04.2021-30.04.2021? Also consider that there can be multiple gaps in between and that ranges can overlap.
Any suggestions?
Try this query and tune it to your needs; the intervals where cnt_ranges is 0 are the gaps:
WITH steps AS (
SELECT date_from as dt, 1 as step FROM ranges
UNION ALL
SELECT date_to as dt, -1 as step FROM ranges
)
SELECT dt as dt_from,
lead(dt) over (order by dt) as dt_to,
sum(step) over (order by dt) as cnt_ranges
FROM steps;
dt_from | dt_to | cnt_ranges
------------------------+-------------------------+-----------
2021-01-01 00:00:00.000 | 2021-03-27 00:00:00.000 | 1
2021-03-27 00:00:00.000 | 2021-03-31 00:00:00.000 | 2
2021-03-31 00:00:00.000 | 2021-04-27 00:00:00.000 | 1
2021-04-27 00:00:00.000 | 2021-05-01 00:00:00.000 | 0
2021-05-01 00:00:00.000 | 2021-12-31 00:00:00.000 | 1
2021-12-31 00:00:00.000 | | 0
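If you only want the gap rows themselves, a wrapper like this (a sketch that assumes the stored end dates are inclusive, hence the one-day shifts) keeps just the uncovered intervals:
-- A sketch: keep only the uncovered intervals (cnt_ranges = 0) and shift by
-- one day on each side because the stored end dates are inclusive.
WITH steps AS (
  SELECT date_from AS dt,  1 AS step FROM ranges
  UNION ALL
  SELECT date_to   AS dt, -1 AS step FROM ranges
), sweep AS (
  SELECT dt                           AS dt_from,
         lead(dt)  over (order by dt) AS dt_to,
         sum(step) over (order by dt) AS cnt_ranges
  FROM   steps
)
SELECT dt_from + 1 AS gap_from,
       dt_to   - 1 AS gap_to
FROM   sweep
WHERE  cnt_ranges = 0
AND    dt_to IS NOT NULL;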
You are modeling date ranges incorrectly; an interval ending at midnight on 02-14-2021, for example, should not include 02-14-2021. In your model it does.
This leads to unnecessary complications in all the queries you write against your model. In the solution below I need to add 1 to end dates first, do all the processing, and then subtract 1 at the end.
with
  ranges (date_from, date_to) as (
    select to_date('01.01.2021 00:00:00','DD.MM.YYYY hh24:mi:ss'),
           to_date('31.03.2021 00:00:00','DD.MM.YYYY hh24:mi:ss')
    from dual
    union all
    select to_date('27.03.2021 00:00:00','DD.MM.YYYY hh24:mi:ss'),
           to_date('27.04.2021 00:00:00','DD.MM.YYYY hh24:mi:ss')
    from dual
    union all
    select to_date('01.05.2021 00:00:00','DD.MM.YYYY hh24:mi:ss'),
           to_date('31.12.2021 00:00:00','DD.MM.YYYY hh24:mi:ss')
    from dual
  )
select first_missing, last_missing - 1 as last_missing
from (
  select dt as first_missing,
         lead(df) over (order by dt) as last_missing
  from (select date_from, date_to + 1 as date_to from ranges)
  match_recognize(
    order by date_from
    measures first(date_from) as df, max(date_to) as dt
    pattern (a* b)
    define a as max(date_to) >= next (date_from)
  )
)
where last_missing is not null
;
FIRST_MISSING LAST_MISSING
------------------- -------------------
28.04.2021 00:00:00 30.04.2021 00:00:00

Find peaks of data

So I have a table Integrations.
Inte | Start Date | End Date | Total_Duration
-------------------------------------------------------
INT1 | 1/7/2021 7:16:00 | 1/7/2021 9:22:00 | 02:06:00
INt2 | 2/7/2021 3:48:00 | 2/7/2021 5:10:00 | 01:22:00
Output I need:
Running Time | No of Inte.
-------------------------------------------------------
1/7/2021 7:00:00 | 1
1/7/2021 8:00:00 | 1
1/7/2021 9:00:00 | 1
2/7/2021 4:00:00 | 1
2/7/2021 5:00:00 | 1
Basically, I want to plot the peak hours when the most integrations were running.
The SQL query I wrote:
select time, sum(value) as No_of_Inte
from(
select round(Start_Date, 'HH24') as time, count(*) as value
from Integrations
group by Start_Date
)
group by time
order by time asc
But this does not consider Total Duration.
Output:
Running Time | No of Inte.
-------------------------------------------------------
1/7/2021 7:00:00 | 1
2/7/2021 4:00:00 | 1
Also, new Integrations are added every day.
This can be done using a recursive query. First, create the test data:
CREATE TABLE integrations (inte,start_date, end_date)
AS
(
SELECT 'INT1', TO_DATE('1/7/2021 7:16:00','DD/MM/YYYY HH24:MI:SS'), TO_DATE('1/7/2021 9:22:00','DD/MM/YYYY HH24:MI:SS') FROM dual UNION ALL
SELECT 'INT2', TO_DATE('2/7/2021 3:48:00','DD/MM/YYYY HH24:MI:SS'), TO_DATE('2/7/2021 5:10:00','DD/MM/YYYY HH24:MI:SS') FROM dual
);
Now use a recursive query to loop through the hours between the start and end dates. Then group by hour to get the correct counts per hour.
WITH row_per_hours (id, run_hour, end_date) AS
(
SELECT inte,
TRUNC(start_date,'HH24'),
end_date
FROM integrations
UNION ALL
SELECT id,
run_hour + INTERVAL '1' HOUR,
end_date
FROM row_per_hours
WHERE run_hour + INTERVAL '1' HOUR < end_date
)
SELECT TO_CHAR(run_hour,'DD/MM/YYYY HH24:MI:SS') as running_time,
COUNT(id) as integration_count
FROM row_per_hours
GROUP BY TO_CHAR(run_hour,'DD/MM/YYYY HH24:MI:SS') ORDER BY 1;
RUNNING_TIME INTEGRATION_COUNT
------------------- -----------------
01/07/2021 07:00:00 1
01/07/2021 08:00:00 1
01/07/2021 09:00:00 1
02/07/2021 03:00:00 1
02/07/2021 04:00:00 1
02/07/2021 05:00:00 1
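The 02/07 03:00 row appears because the anchor member truncates the start date to the hour; if you would rather round to the nearest hour, which matches the desired output starting at 04:00, a small variation of the same query would be (a sketch):
-- A sketch: the same recursive query, but ROUND the start to the nearest hour
-- instead of TRUNC, so 03:48 maps to 04:00.
WITH row_per_hours (id, run_hour, end_date) AS
(
  SELECT inte,
         ROUND(start_date,'HH24'),
         end_date
  FROM integrations
  UNION ALL
  SELECT id,
         run_hour + INTERVAL '1' HOUR,
         end_date
  FROM row_per_hours
  WHERE run_hour + INTERVAL '1' HOUR < end_date
)
SELECT TO_CHAR(run_hour,'DD/MM/YYYY HH24:MI:SS') as running_time,
       COUNT(id) as integration_count
FROM row_per_hours
GROUP BY TO_CHAR(run_hour,'DD/MM/YYYY HH24:MI:SS')
ORDER BY 1;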
For 12c and above:
You can use a lateral join to generate the required number of rows for each interval. Since it looks like you need the dates rounded towards the nearest hour, I've used round instead of trunc. Or is there another reason the first interval treats 7:00 as included?
with a(Inte, start_dt, end_dt) as (
select
'INT1'
, to_date('1/7/2021 07:16:00', 'dd/mm/yyyy hh24:mi:ss')
, to_date('1/7/2021 09:22:00', 'dd/mm/yyyy hh24:mi:ss')
from dual union all
select
'INt2'
, to_date('2/7/2021 03:48:00', 'dd/mm/yyyy hh24:mi:ss')
, to_date('2/7/2021 05:10:00', 'dd/mm/yyyy hh24:mi:ss')
from dual
)
select /*+ gather_plan_statistics */
b.hour_
, count(1) as int_cnt
from a
outer apply (
select
round(a.start_dt + numtodsinterval(level - 1, 'HOUR'), 'hh24') as hour_
from dual
connect by round(start_dt, 'hh24') + numtodsinterval(level - 1, 'HOUR') <= trunc(end_dt, 'hh24')
) b
group by b.hour_
order by 1
HOUR_ | INT_CNT
:------------------ | ------:
2021-07-01 07:00:00 | 1
2021-07-01 08:00:00 | 1
2021-07-01 09:00:00 | 1
2021-07-02 04:00:00 | 1
2021-07-02 05:00:00 | 1

Counting records and grouping them by the hour

I'm trying to count the records in my table and group them by hour. I'm getting results with my query, but I want it to return every hour even if there are no records.
My current query is:
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'HH') as TRANSACTIONDATE
FROM TABLE
WHERE date_modified between to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') and to_date('24-Jan-19 06:59:59','dd-MON-yy hh24:mi:ss')
group by trunc(date_modified, 'HH');
This returns a result like this:
TRANSACTIONCOUNT | TRANSACTIONDATE
43 | 23-Jan-19 07:00:00
47 | 23-Jan-19 08:00:00
156 | 23-Jan-19 14:00:00
558 | 23-Jan-19 15:00:00
What I want is for it to return every hour between my two dates, so:
TRANSACTIONCOUNT | TRANSACTIONDATE
43 | 23-Jan-19 07:00:00
47 | 23-Jan-19 08:00:00
0 | 23-Jan-19 09:00:00
0 | 23-Jan-19 10:00:00
0 | 23-Jan-19 11:00:00
0 | 23-Jan-19 12:00:00
0 | 23-Jan-19 13:00:00
156 | 23-Jan-19 14:00:00
558 | 23-Jan-19 15:00:00
--......
0 | 24-Jan-19 00:00:00
0 | 24-Jan-19 01:00:00
0 | 24-Jan-19 02:00:00
--and so on
To fill the holes in the transaction hours, first create a complete table of hours.
You can use recursive subquery factoring to do it:
WITH hour_table(TRANSACTIONDATE) AS (
SELECT to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') /* init hour here */
FROM DUAL
UNION ALL
SELECT TRANSACTIONDATE + 1/24
FROM hour_table
WHERE TRANSACTIONDATE + 1/24 < to_date('24-JAN-19 06:59:59','dd-MON-yy hh24:mi:ss') /* limit here */
)
select * from hour_table;
TRANSACTIONDATE
-------------------
23.01.2019 07:00:00
23.01.2019 08:00:00
...
24.01.2019 05:00:00
24.01.2019 06:00:00
Note that this query uses your starting and ending dates; the starting date must be an exact hour.
The next step is simply to outer join this hour table to your aggregation, using NVL to supply a default value for the missing hours.
with hour_table(TRANSACTIONDATE) AS (
SELECT to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') /* init hour here */
FROM DUAL
UNION ALL
SELECT TRANSACTIONDATE + 1/24
FROM hour_table
WHERE TRANSACTIONDATE + 1/24 < to_date('24-JAN-19 06:59:59','dd-MON-yy hh24:mi:ss') /* limit */
),
agg as (
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'HH') as TRANSACTIONDATE
FROM "TABLE"
WHERE date_modified between to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') and to_date('24-Jan-19 06:59:59','dd-MON-yy hh24:mi:ss')
group by trunc(date_modified, 'HH')
)
select t.TRANSACTIONDATE, nvl(transactioncount,0) transactioncount
from hour_table t
left outer join agg a
on t.TRANSACTIONDATE = a.TRANSACTIONDATE
order by 1;
You might consider using the following CONNECT BY level logic:
SELECT sum(transactioncount) as transactioncount, transactiondate
FROM
(
with "TABLE"(date_modified) as
(
SELECT timestamp'2019-01-23 08:00:00' FROM dual union all
SELECT timestamp'2019-01-23 08:30:00' FROM dual union all
SELECT timestamp'2019-01-23 09:00:00' FROM dual union all
SELECT timestamp'2019-01-24 05:01:00' FROM dual
)
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'hh24') as transactiondate
FROM "TABLE" t
GROUP BY trunc(date_modified, 'HH24')
UNION ALL
SELECT 0, timestamp'2019-01-23 07:00:00' + ( level - 1 )/24
FROM dual
CONNECT BY level <= 24 * extract( day from
timestamp'2019-01-24 06:59:59'-
timestamp'2019-01-23 07:00:00') +
extract( hour from
timestamp'2019-01-24 06:59:59'-
timestamp'2019-01-23 07:00:00') + 1
)
GROUP BY transactiondate
ORDER BY transactiondate

Find periods from timestamps in ordered table

Let's assume I have the following table CALLS, which is sorted by the column CALL of type TIMESTAMP:
CALL TYPE
--------------------- ------
31.10.2018 10:00:00 OFF
31.10.2018 11:00:00 ON
31.10.2018 12:00:00 ON
31.10.2018 13:00:00 ON
31.10.2018 14:00:00 OFF
31.10.2018 15:00:00 OFF
31.10.2018 16:00:00 ON
31.10.2018 17:00:00 ON
I want to write a view that finds individual groups of calls with TYPE = 'ON' and extracts their start and end dates. As a result, for the given example I get two groups:
START END
--------------------- ---------------------
31.10.2018 11:00:00 31.10.2018 13:00:00
31.10.2018 16:00:00 31.10.2018 17:00:00
We should assume:
The minimal group size is 1, so a group can have the same start and end date
ON rows are separated by OFF rows, but the first and last rows don't have to be of type OFF
Is it possible to achieve that in Oracle 12c?
This is a gaps-and-islands problem. In this case, a difference of row numbers with aggregation does what you want:
select min(call) as start_time, max(call) as end_time
from (select t.*,
             row_number() over (partition by type order by call) as seqnum_t,
             row_number() over (order by call) as seqnum
      from t
     ) t
where type = 'ON'
group by (seqnum - seqnum_t)
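To see why the difference is constant within an island, a sketch like this (against the same t table with its CALL and TYPE columns) lists both row numbers side by side:
-- A sketch (same t table as above): show both row numbers and their difference.
-- Within each unbroken run of 'ON' rows the difference stays constant.
select t.*,
       row_number() over (order by call)                   as seqnum,
       row_number() over (partition by type order by call) as seqnum_t,
       row_number() over (order by call)
         - row_number() over (partition by type order by call) as grp
from t
order by call;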
If you are running Oracle 12, you can also use SQL for Pattern Matching (MATCH_RECOGNIZE).
It would look like this:
WITH t (CALL, TYPE) AS (
SELECT TO_TIMESTAMP('31.10.2018 10:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'OFF' FROM dual UNION ALL
SELECT TO_TIMESTAMP('31.10.2018 11:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'ON' FROM dual UNION ALL
SELECT TO_TIMESTAMP('31.10.2018 12:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'ON' FROM dual UNION ALL
SELECT TO_TIMESTAMP('31.10.2018 13:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'ON' FROM dual UNION ALL
SELECT TO_TIMESTAMP('31.10.2018 14:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'OFF' FROM dual UNION ALL
SELECT TO_TIMESTAMP('31.10.2018 15:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'OFF' FROM dual UNION ALL
SELECT TO_TIMESTAMP('31.10.2018 16:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'ON' FROM dual UNION ALL
SELECT TO_TIMESTAMP('31.10.2018 17:00:00', 'dd.mm.yyyy hh24:mi:ss'), 'ON' FROM dual)
SELECT *
FROM t
MATCH_RECOGNIZE (
ORDER BY CALL
MEASURES
FINAL MIN(CALL) AS CALL_START,
FINAL MAX(CALL) AS CALL_END
PATTERN ( CALL_ON+ )
DEFINE
CALL_ON AS TYPE = 'ON'
);
+-----------------------------------------------------------+
| CALL_START | CALL_END |
+-----------------------------------------------------------+
| 31.10.2018 11:00:00.000 | 31.10.2018 13:00:00.000 |
| 31.10.2018 16:00:00.000 | 31.10.2018 17:00:00.000 |
+-----------------------------------------------------------+

SQL, Analytical Functions, rownumber

I need to get the same row number (or other numeric value) in SQL to group values that match conditions like the following example:
If we have the same agent name and the time difference between the current row and the preceding row is less than 06:00 hours, after partitioning by name and ordering by time,
then assign the same row number; otherwise increase it.
Example row data and the desired rownumber output:
person date_time rownumber
A 01/04/2018 10:00 1
A 01/04/2018 13:00 1
A 01/04/2018 14:00 1
A 01/04/2018 15:00 1
A 01/04/2018 23:00 2
A 02/04/2018 03:00 2
A 02/04/2018 12:00 3
A 02/04/2018 16:00 3
B 01/04/2018 17:00 4
B 01/04/2018 20:30 4
C 01/04/2018 18:00 5
C 01/04/2018 22:00 5
You can do this with a combination of LAG and SUM analytic functions, like so:
WITH your_table AS (SELECT 'A' person, to_date('01/04/2018 10', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'A' person, to_date('01/04/2018 13', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'A' person, to_date('01/04/2018 14', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'A' person, to_date('01/04/2018 15', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'A' person, to_date('01/04/2018 23', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'A' person, to_date('02/04/2018 03', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'A' person, to_date('02/04/2018 12', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'A' person, to_date('02/04/2018 16', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'B' person, to_date('01/04/2018 17', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'B' person, to_date('01/04/2018 20', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'C' person, to_date('01/04/2018 18', 'dd/mm/yyyy hh24') date_time FROM dual UNION ALL
SELECT 'C' person, to_date('01/04/2018 22', 'dd/mm/yyyy hh24') date_time FROM dual)
SELECT person,
       date_time,
       SUM(period_change) OVER (ORDER BY person, date_time) rownumber
FROM (SELECT person,
             date_time,
             CASE WHEN date_time - LAG(date_time, 1, date_time - 7/24)
                                     OVER (PARTITION BY person ORDER BY date_time) > 6/24
                  THEN 1
                  ELSE 0
             END period_change
      FROM your_table);
PERSON DATE_TIME ROWNUMBER
------ ----------- ----------
A 01/04/2018 1
A 01/04/2018 1
A 01/04/2018 1
A 01/04/2018 1
A 01/04/2018 2
A 02/04/2018 2
A 02/04/2018 3
A 02/04/2018 3
B 01/04/2018 4
B 01/04/2018 4
C 01/04/2018 5
C 01/04/2018 5
This works by putting a 1 in the additional column whenever a new group is triggered.
Once you have that, you can do a running sum on that column. That means that after every group change, subsequent rows are assigned the same number, up until the next group change.
N.B. As suggested by Peter Lang in the comments, you might prefer to change the CASE expression generating the "period_change" column to:
CASE WHEN date_time - LAG(date_time) OVER (PARTITION BY person ORDER BY date_time) < 6/24 THEN 0 ELSE 1 END
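Plugged into the full query, that variant would look like this (a sketch reusing the your_table CTE from above; the first row per person gets a NULL from LAG, which falls through to ELSE 1 and still starts a new group):
-- A sketch: the simpler CASE plugged into the same query shape.
-- Prepend the your_table CTE from the example above to run it.
SELECT person,
       date_time,
       SUM(period_change) OVER (ORDER BY person, date_time) rownumber
FROM (SELECT person,
             date_time,
             CASE WHEN date_time - LAG(date_time) OVER (PARTITION BY person ORDER BY date_time) < 6/24
                  THEN 0
                  ELSE 1
             END period_change
      FROM your_table);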