How can I generate a weekly series whose start date resets at each month start and whose end date is start date + 6 in PostgreSQL - sql

Below is the query I have tried:
with weeks as (
    select d::date as start_date,
           case when to_char(d::date + 6, 'Month') = to_char(d::date, 'Month')
                then d::date + 6
                else (date_trunc('month', d::date) + interval '1 month - 1 day')::date
           end as end_date
    from generate_series('2022-01-01', '2022-07-30', '7 days'::interval) d
)
select * from weeks;
I want results as:
Start Date | End Date
-----------+-----------
2022-01-01 | 2022-01-07
2022-01-08 | 2022-01-14
At the end of January it should go from 2022-01-29 to 2022-01-31, then the next month should start again from 2022-02-01 to 2022-02-07, and so on.

I would combine two generate_series() calls here: one for the month starts and one for the week starts.
The week end can then be capped with least(), comparing against the last day of the month:
select gw.week_start::date as start_date,
       least(gw.week_start::date + 6,
             date_trunc('month', gw.week_start) + interval '1 month - 1 day')::date as end_date
from generate_series('2022-01-01', '2022-07-30', interval '1 month') as gm(month_start)
cross join generate_series(gm.month_start, gm.month_start + interval '1 month - 1 day', interval '1 week') as gw(week_start);
This returns:
start_date | end_date
-----------+-----------
2022-01-01 | 2022-01-07
2022-01-08 | 2022-01-14
2022-01-15 | 2022-01-21
2022-01-22 | 2022-01-28
2022-01-29 | 2022-01-31
2022-02-01 | 2022-02-07
2022-02-08 | 2022-02-14
2022-02-15 | 2022-02-21
2022-02-22 | 2022-02-28
2022-03-01 | 2022-03-07
2022-03-08 | 2022-03-14
2022-03-15 | 2022-03-21
2022-03-22 | 2022-03-28
2022-03-29 | 2022-03-31
2022-04-01 | 2022-04-07
2022-04-08 | 2022-04-14
2022-04-15 | 2022-04-21
2022-04-22 | 2022-04-28
2022-04-29 | 2022-04-30
2022-05-01 | 2022-05-07
2022-05-08 | 2022-05-14
2022-05-15 | 2022-05-21
2022-05-22 | 2022-05-28
2022-05-29 | 2022-05-31
2022-06-01 | 2022-06-07
2022-06-08 | 2022-06-14
2022-06-15 | 2022-06-21
2022-06-22 | 2022-06-28
2022-06-29 | 2022-06-30
2022-07-01 | 2022-07-07
2022-07-08 | 2022-07-14
2022-07-15 | 2022-07-21
2022-07-22 | 2022-07-28
2022-07-29 | 2022-07-31

Related

SQL - Aggregate Timeseries Table (HourOfDay, Val) to Average Value of HourOfDay by Weekday (e.g. Avg of Mondays 10:00-11:00, 11:00-12:00, ..., Tue ...)

So far I have made an SQL query that gives me a table with the number of customers handled for each hour of the day, given an arbitrary start and end datetime value (from the Grafana interface). The result might span many weeks. My goal is to implement an hourly heatmap by weekday with averaged values.
How do I aggregate those customers per hour to show the average value for each hour per weekday?
So let's say I have 24 values per day over 19 days. How do I aggregate so that I get 24 values each for Mon, Tue, Wed, Thu, Fri, Sat and Sun, with each hour representing the average value across those days?
Also, only use data from full weeks, so strip leading and trailing days that are not part of a fully represented week (so the same number of each weekday contributes to the average).
Here is a segment of what my SQL query returns so far (hour of each day, number of customers):
...
2021-12-13 11:00:00 | 0
2021-12-13 12:00:00 | 3
2021-12-13 13:00:00 | 4
2021-12-13 14:00:00 | 4
2021-12-13 15:00:00 | 7
2021-12-13 16:00:00 | 17
2021-12-13 17:00:00 | 12
2021-12-13 18:00:00 | 18
2021-12-13 19:00:00 | 15
2021-12-13 20:00:00 | 8
2021-12-13 21:00:00 | 10
2021-12-13 22:00:00 | 1
2021-12-13 23:00:00 | 0
2021-12-14 00:00:00 | 0
2021-12-14 01:00:00 | 0
2021-12-14 02:00:00 | 0
2021-12-14 03:00:00 | 0
2021-12-14 04:00:00 | 0
2021-12-14 05:00:00 | 0
2021-12-14 06:00:00 | 0
2021-12-14 07:00:00 | 0
2021-12-14 08:00:00 | 0
2021-12-14 09:00:00 | 0
2021-12-14 10:00:00 | 12
2021-12-14 11:00:00 | 12
2021-12-14 12:00:00 | 19
2021-12-14 13:00:00 | 11
2021-12-14 14:00:00 | 11
2021-12-14 15:00:00 | 12
2021-12-14 16:00:00 | 9
2021-12-14 17:00:00 | 2
...
So (schematically, example data) startDate 2021-12-10 11:00 to endDate 2021-12-31 17:00
-------------------------------
...
Mon 2021-12-13 12:00 | 3
Mon 2021-12-13 13:00 | 4
Mon 2021-12-13 14:00 | 4
...
Mon 2021-12-20 12:00 | 1
Mon 2021-12-20 13:00 | 6
Mon 2021-12-20 14:00 | 2
...
Mon 2021-12-27 12:00 | 2
Mon 2021-12-27 13:00 | 2
Mon 2021-12-27 14:00 | 3
...
-------------------------------
into this:
strip leading fri 10., sat 11., sun 12.
strip trailing tue 28., wed 29., thu 30., fri 31.
average hours per weekday
-------------------------------
...
Mon 12:00 | 2
Mon 13:00 | 4
Mon 14:00 | 3
...
Tue 12:00 | x
Tue 13:00 | y
Tue 14:00 | z
...
-------------------------------
My approach so far:
WITH CustomersPerHour as (
    SELECT dateadd(hour, datediff(hour, 0, Systemdatum), 0) as DayHour, Count(*) as C
    FROM CustomerList
    WHERE CustomerID > 0
      AND Datum BETWEEN '2021-12-10T11:00:00Z' AND '2021-12-31T17:00:00Z'
      AND EntryID IN (62, 65)
      AND CustomerID IN (SELECT * FROM udf_getActiveUsers())
    GROUP BY dateadd(hour, datediff(hour, 0, Systemdatum), 0)
)
-- add null values on missing data/insert missing hours
SELECT DATEDIFF(second, '1970-01-01', dt.Date) AS time, C as Customers
FROM dbo.udf_generateHoursTable('2021-12-03T18:14:56Z', '2022-03-13T18:14:56Z') as dt
LEFT JOIN CustomersPerHour cPh ON dt.Date = cPh.DayHour
ORDER BY time ASC
Hi, the simplest solution is to do exactly what you wrote in your example: create a custom base for the aggregation.
So the first step is to prepare your data in an aggregated table with date & hour precision and the customer count.
Then create the base.
This is an example of the basic idea:
-- EXAMPLE
SELECT
    DATENAME(WEEKDAY, GETDATE()) + ' ' + CAST(DATEPART(HOUR, GETDATE()) AS varchar(8)) + ':00'
-- OUTPUT: Sunday 21:00
You can concatenate the data like this and then use it in the GROUP BY clause. Adjust this query for your use case:
SELECT
DATENAME(WEEKDAY, <DATETIME_COL>) + ' ' + CAST(DATEPART(HOUR, <DATETIME_COL>) AS varchar(8)) + ':00' as base
,SUM(...) as sum_of_whatever
,AVG(...) as avg_of_whatever
FROM <YOUR_AGG_TABLE>
GROUP BY DATENAME(WEEKDAY, <DATETIME_COL>) + ' ' + CAST(DATEPART(HOUR, <DATETIME_COL>) AS varchar(8)) + ':00'
This creates the base exactly as you wanted.
You can use the same logic to create other aggregation bases.
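Applied to the hourly data from the question, a hedged sketch could look like the following. It assumes the CustomersPerHour result from the question (columns DayHour and C) is available as a table or CTE, and that leading/trailing partial weeks have already been stripped from the date range:
-- Sketch only: CustomersPerHour is assumed to hold one row per hour
-- (DayHour datetime, C int) with missing hours already filled in.
SELECT
    DATENAME(WEEKDAY, DayHour) + ' '
        + CAST(DATEPART(HOUR, DayHour) AS varchar(8)) + ':00' AS base,
    AVG(CAST(C AS float)) AS avg_customers
FROM CustomersPerHour
GROUP BY
    DATENAME(WEEKDAY, DayHour) + ' '
        + CAST(DATEPART(HOUR, DayHour) AS varchar(8)) + ':00'
-- ordering is cosmetic; DATEPART(WEEKDAY, ...) depends on the session's DATEFIRST setting
ORDER BY MIN(DATEPART(WEEKDAY, DayHour)), MIN(DATEPART(HOUR, DayHour));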

PostgreSQL count max number of concurrent user sessions per hour

Situation
We have a PostgreSQL 9.1 database containing user sessions with login date/time and logout date/time per row. Table looks like this:
 user_id |      login_ts       |      logout_ts
---------+---------------------+---------------------
 USER1   | 2021-02-03 09:23:00 | 2021-02-03 11:44:00
 USER2   | 2021-02-03 10:49:00 | 2021-02-03 13:30:00
 USER3   | 2021-02-03 13:32:00 | 2021-02-03 15:31:00
 USER4   | 2021-02-04 13:50:00 | 2021-02-04 14:53:00
 USER5   | 2021-02-04 14:44:00 | 2021-02-04 15:21:00
 USER6   | 2021-02-04 14:52:00 | 2021-02-04 17:59:00
Goal
We would like to get the max number of concurrent users for each of the 24 hours of each day in the time range, like this:
date | hour | sessions
-----------+-------+-----------
2021-02-03 | 01:00 | 0
2021-02-03 | 02:00 | 0
2021-02-03 | 03:00 | 0
2021-02-03 | 04:00 | 0
2021-02-03 | 05:00 | 0
2021-02-03 | 06:00 | 0
2021-02-03 | 07:00 | 0
2021-02-03 | 08:00 | 0
2021-02-03 | 09:00 | 1
2021-02-03 | 10:00 | 2
2021-02-03 | 11:00 | 2
2021-02-03 | 12:00 | 1
2021-02-03 | 13:00 | 1
2021-02-03 | 14:00 | 1
2021-02-03 | 15:00 | 0
2021-02-03 | 16:00 | 0
2021-02-03 | 17:00 | 0
2021-02-03 | 18:00 | 0
2021-02-03 | 19:00 | 0
2021-02-03 | 20:00 | 0
2021-02-03 | 21:00 | 0
2021-02-03 | 22:00 | 0
2021-02-03 | 23:00 | 0
2021-02-03 | 24:00 | 0
2021-02-04 | 01:00 | 0
2021-02-04 | 02:00 | 0
2021-02-04 | 03:00 | 0
2021-02-04 | 04:00 | 0
2021-02-04 | 05:00 | 0
2021-02-04 | 06:00 | 0
2021-02-04 | 07:00 | 0
2021-02-04 | 08:00 | 0
2021-02-04 | 09:00 | 0
2021-02-04 | 10:00 | 0
2021-02-04 | 11:00 | 0
2021-02-04 | 12:00 | 0
2021-02-04 | 13:00 | 1
2021-02-04 | 14:00 | 3
2021-02-04 | 15:00 | 1
2021-02-04 | 16:00 | 1
2021-02-04 | 17:00 | 1
2021-02-04 | 18:00 | 0
2021-02-04 | 19:00 | 0
2021-02-04 | 20:00 | 0
2021-02-04 | 21:00 | 0
2021-02-04 | 22:00 | 0
2021-02-04 | 23:00 | 0
2021-02-04 | 24:00 | 0
Considerations
"Concurrent" means at the same point in time. Thus user2 and user3 do not overlap for
13:00, but user4 and user6 do overlap for 14:00 even though they only overlap for 1 minute.
User sessions can span multiple hours and would thus count for each hour they are part of.
Each user can only be online once at one point in time.
If there are no users for a particular hour, this should return 0.
Similar questions
A similar question was answered here: Count max. number of concurrent user sessions per day, by Erwin Brandstetter. However, that is per day rather than per hour, and I am apparently too much of a noob at PostgreSQL to be able to translate it into hourly, so I'm hoping someone can help.
I would decompose this into two problems:
Find the number of overlaps and when they begin and end.
Find the hours.
Note two things:
I am assuming that '2014-04-03 17:59:00' is a typo.
The following goes by the beginning of the hour and puts the date/hour in a single column.
First, calculate the overlaps. For this, unpivot the logins and logouts. Put in a counter of +1 for logins and -1 for logouts and compute a cumulative sum. This looks like:
with overlap as (
select v.ts, sum(v.inc) as inc,
sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
lead(v.ts) over (order by v.ts) as next_ts
from sessions s cross join lateral
(values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
group by v.ts
)
select *
from overlap
order by ts;
For the next step, use generate_series() to generate timestamps one hour apart. Look for the maximum value during that period using left join and group by:
with overlap as (
select v.ts, sum(v.inc) as inc,
sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
lead(v.ts) over (order by v.ts) as next_ts
from sessions s cross join lateral
(values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
group by v.ts
)
select gs.hh, coalesce(max(o.num_overlaps), 0) as num_overlaps
from generate_series('2021-02-03'::date, '2021-02-05'::date, interval '1 hour') gs(hh) left join
overlap o
on o.ts < gs.hh + interval '1 hour' and
o.next_ts > gs.hh
group by gs.hh
order by gs.hh;
Here is a db<>fiddle using your data, fixed with a reasonable logout time for the last record.
For any time period you can calculate the number of concurrent sessions using the OVERLAPS operator in SQL:
CREATE TEMP TABLE sessions (
user_id text not null,
login_ts timestamp,
logout_ts timestamp );
INSERT INTO sessions SELECT 'webuser', d,
d+((1+random()*300)::text||' seconds')::interval
FROM generate_series(
'2021-02-28 07:42'::timestamp,
'2021-03-01 07:42'::timestamp,
'5 seconds'::interval) AS d;
SELECT s1.user_id, s1.login_ts, s1.logout_ts,
(select count(*) FROM sessions s2
WHERE (s2.login_ts, s2.logout_ts) OVERLAPS (s1.login_ts, s1.logout_ts))
AS parallel_sessions
FROM sessions s1 LIMIT 10;
user_id | login_ts | logout_ts | parallel_sessions
---------+---------------------+----------------------------+------------------
webuser | 2021-02-28 07:42:00 | 2021-02-28 07:42:25.528594 | 6
webuser | 2021-02-28 07:42:05 | 2021-02-28 07:45:50.513769 | 47
webuser | 2021-02-28 07:42:10 | 2021-02-28 07:44:18.810066 | 28
webuser | 2021-02-28 07:42:15 | 2021-02-28 07:45:17.3888 | 40
webuser | 2021-02-28 07:42:20 | 2021-02-28 07:43:14.325476 | 15
webuser | 2021-02-28 07:42:25 | 2021-02-28 07:43:44.174841 | 21
webuser | 2021-02-28 07:42:30 | 2021-02-28 07:43:32.679052 | 18
webuser | 2021-02-28 07:42:35 | 2021-02-28 07:45:12.554117 | 38
webuser | 2021-02-28 07:42:40 | 2021-02-28 07:46:37.94311 | 55
webuser | 2021-02-28 07:42:45 | 2021-02-28 07:43:08.398444 | 13
(10 rows)
This works well on small data sets, but for better performance use PostgreSQL range types, as below. This works on Postgres 9.2 and later.
ALTER TABLE sessions ADD timerange tsrange;
UPDATE sessions SET timerange = tsrange(login_ts,logout_ts);
CREATE INDEX ON sessions USING gist (timerange);
CREATE TEMP TABLE level1 AS
SELECT s1.user_id, s1.login_ts, s1.logout_ts,
(select count(*) FROM sessions s2
WHERE s2.timerange && s1.timerange) AS parallel_sessions
FROM sessions s1;
SELECT date_trunc('hour',login_ts) AS hour, count(*),
max(parallel_sessions)
FROM level1
GROUP BY hour;
hour | count | max
---------------------+-------+-----
2021-02-28 14:00:00 | 720 | 98
2021-03-01 03:00:00 | 720 | 99
2021-03-01 06:00:00 | 720 | 94
2021-02-28 09:00:00 | 720 | 96
2021-02-28 10:00:00 | 720 | 97
2021-02-28 18:00:00 | 720 | 94
2021-02-28 11:00:00 | 720 | 97
2021-03-01 00:00:00 | 720 | 97
2021-02-28 19:00:00 | 720 | 99
2021-02-28 16:00:00 | 720 | 94
2021-02-28 17:00:00 | 720 | 95
2021-03-01 02:00:00 | 720 | 99
2021-02-28 08:00:00 | 720 | 96
2021-02-28 23:00:00 | 720 | 94
2021-03-01 07:00:00 | 505 | 92
2021-03-01 04:00:00 | 720 | 95
2021-02-28 21:00:00 | 720 | 97
2021-03-01 01:00:00 | 720 | 93
2021-02-28 22:00:00 | 720 | 96
2021-03-01 05:00:00 | 720 | 93
2021-02-28 20:00:00 | 720 | 95
2021-02-28 13:00:00 | 720 | 95
2021-02-28 12:00:00 | 720 | 97
2021-02-28 15:00:00 | 720 | 98
2021-02-28 07:00:00 | 216 | 93
(25 rows)

How to generate a series for a date range with a minutes interval in Oracle?

In Postgres, the query below works using the generate_series function:
SELECT dates
FROM generate_series(CAST('2019-03-01' as TIMESTAMP), CAST('2019-04-01' as TIMESTAMP), interval '30 mins') AS dates
The query below also works in Oracle, but only for a day interval:
select to_date('2019-03-01','YYYY-MM-DD') + rownum -1 as dates
from all_objects
where rownum <= to_date('2019-03-06','YYYY-MM-DD')-to_date('2019-03-01','YYYY-MM-DD')+1
I want the same result in Oracle for the query below:
SELECT dates
FROM generate_series(CAST('2019-03-01' as TIMESTAMP), CAST('2019-04-01' as TIMESTAMP), interval '30 mins') AS dates
Use a hierarchical query:
SELECT DATE '2019-03-01' + ( LEVEL - 1 ) * INTERVAL '30' MINUTE AS dates
FROM DUAL
CONNECT BY DATE '2019-03-01' + ( LEVEL - 1 ) * INTERVAL '30' MINUTE <= DATE '2019-04-01';
Output:
| DATES |
| :------------------ |
| 2019-03-01 00:00:00 |
| 2019-03-01 00:30:00 |
| 2019-03-01 01:00:00 |
| 2019-03-01 01:30:00 |
| 2019-03-01 02:00:00 |
| 2019-03-01 02:30:00 |
| 2019-03-01 03:00:00 |
| 2019-03-01 03:30:00 |
| 2019-03-01 04:00:00 |
| 2019-03-01 04:30:00 |
| 2019-03-01 05:00:00 |
| 2019-03-01 05:30:00 |
...
| 2019-03-31 19:30:00 |
| 2019-03-31 20:00:00 |
| 2019-03-31 20:30:00 |
| 2019-03-31 21:00:00 |
| 2019-03-31 21:30:00 |
| 2019-03-31 22:00:00 |
| 2019-03-31 22:30:00 |
| 2019-03-31 23:00:00 |
| 2019-03-31 23:30:00 |
| 2019-04-01 00:00:00 |
db<>fiddle here
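If recursive subquery factoring is available (Oracle 11.2 or later), an equivalent series can also be built with a recursive WITH clause; a hedged sketch using the same bounds as the question:
-- Anchor row is 2019-03-01 00:00; each recursive step adds 30 minutes
-- until 2019-04-01 is reached.
WITH dates (dt) AS (
  SELECT CAST(DATE '2019-03-01' AS TIMESTAMP) FROM DUAL
  UNION ALL
  SELECT dt + INTERVAL '30' MINUTE
  FROM   dates
  WHERE  dt + INTERVAL '30' MINUTE <= DATE '2019-04-01'
)
SELECT dt AS dates FROM dates;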

Get current week in PostgreSQL

I have been searching the web for the proper PostgreSQL syntax for current_week. I searched through the attached link (Date/Time) but could not get anything fruitful out of it. My task is to get Sunday as the start of the week.
I tried the same as for current_date, but it failed:
select current_week
There has to be a current-week syntax for PostgreSQL.
Knowing that extract('dow' from ...) gives "the day of the week as Sunday (0) to Saturday (6)", and that, by definition, ISO weeks start on Mondays, you can work around this by subtracting one day:
select date_trunc('week', current_date) - interval '1 day' as current_week
current_week
------------------------
2016-12-18 00:00:00+00
(1 row)
Here is a sample:
t=# with d as (select generate_series('2016-12-11','2016-12-28','1 day'::interval) t)
select date_trunc('week', d.t)::date - interval '1 day' as current_week, extract('dow' from d.t), d.t from d
;
current_week | date_part | t
---------------------+-----------+------------------------
2016-12-04 00:00:00 | 0 | 2016-12-11 00:00:00+00
2016-12-11 00:00:00 | 1 | 2016-12-12 00:00:00+00
2016-12-11 00:00:00 | 2 | 2016-12-13 00:00:00+00
2016-12-11 00:00:00 | 3 | 2016-12-14 00:00:00+00
2016-12-11 00:00:00 | 4 | 2016-12-15 00:00:00+00
2016-12-11 00:00:00 | 5 | 2016-12-16 00:00:00+00
2016-12-11 00:00:00 | 6 | 2016-12-17 00:00:00+00
2016-12-11 00:00:00 | 0 | 2016-12-18 00:00:00+00
2016-12-18 00:00:00 | 1 | 2016-12-19 00:00:00+00
2016-12-18 00:00:00 | 2 | 2016-12-20 00:00:00+00
2016-12-18 00:00:00 | 3 | 2016-12-21 00:00:00+00
2016-12-18 00:00:00 | 4 | 2016-12-22 00:00:00+00
2016-12-18 00:00:00 | 5 | 2016-12-23 00:00:00+00
2016-12-18 00:00:00 | 6 | 2016-12-24 00:00:00+00
2016-12-18 00:00:00 | 0 | 2016-12-25 00:00:00+00
2016-12-25 00:00:00 | 1 | 2016-12-26 00:00:00+00
2016-12-25 00:00:00 | 2 | 2016-12-27 00:00:00+00
2016-12-25 00:00:00 | 3 | 2016-12-28 00:00:00+00
(18 rows)
Time: 0.483 ms
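Note that with this workaround a Sunday itself still lands in the previous week's bucket (see the dow = 0 rows above). If a Sunday should start its own week, one common variant is to shift the date forward a day before truncating and then shift back; a sketch:
-- Sunday 2016-12-18 now yields 2016-12-18 instead of 2016-12-11
select date_trunc('week', current_date + interval '1 day') - interval '1 day' as current_week;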
One method would be date_trunc():
select date_trunc('week', current_date) as current_week

Group by an individual timeframe

I would like to group rows of a table by an individual time frame.
As an example let's imagine we have a list of departures at an airport:
| Departure | Flight | Destination |
| 2016-06-01 10:12:00 | LH1234 | New York |
| 2016-06-02 14:23:00 | LH1235 | Berlin |
| 2016-06-02 14:30:00 | LH1236 | Tokio |
| 2016-06-03 18:45:00 | LH1237 | Belgrad |
| 2016-06-04 04:10:00 | LH1237 | Rio |
| 2016-06-04 06:20:00 | LH1237 | Paris |
I can easily group the data by full hours (days, weeks, ...) using the following query:
select to_char(departure, 'HH24') as "full hour", count(*) as "number flights"
from departures
group by to_char(departure, 'HH24')
This should result in the following table.
| full hour | number flights |
| 04 | 1 |
| 06 | 1 |
| 10 | 1 |
| 14 | 2 |
| 18 | 1 |
Now my question: is there an elegant way (or best practice) to group data by an individual time frame?
The result I'm looking for is the following:
| time frame | number flights |
| 2016-05-31 22:00 - 2016-06-01 06:00 | 0 |
| 2016-06-01 06:00 - 2016-06-01 14:00 | 1 |
| 2016-06-01 14:00 - 2016-06-01 22:00 | 0 |
| 2016-06-01 22:00 - 2016-06-02 06:00 | 0 |
| 2016-06-02 06:00 - 2016-06-02 14:00 | 0 |
| 2016-06-02 14:00 - 2016-06-02 22:00 | 2 |
| 2016-06-02 22:00 - 2016-06-03 06:00 | 0 |
| 2016-06-03 06:00 - 2016-06-03 14:00 | 0 |
| 2016-06-03 14:00 - 2016-06-03 22:00 | 1 |
| 2016-06-03 22:00 - 2016-06-04 06:00 | 1 |
| 2016-06-04 06:00 - 2016-06-04 14:00 | 1 |
| 2016-06-04 14:00 - 2016-06-04 22:00 | 0 |
| 2016-06-04 22:00 - 2016-06-05 06:00 | 0 |
(The rows with 0 flights aren't relevant. They are just there for a better visualization of the problem.)
Thanks for your answers in advance. :-)
Peter
Since your groups start at 22:00 and repeat at multiples of 8 hours afterwards, you can use TRUNC() with an offset of 2 hours to get the results grouped by each (shifted) day.
You can then work out which third of that day the departure is in and also group by that:
GROUP BY TRUNC( Departure + 2/24 ),
FLOOR( ( Departure + 2/24 - TRUNC( Departure + 2/24 ) ) * 3 )
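Put into a complete statement, a sketch (assuming the departures table and Departure column from the question, and rebuilding the time-frame label from the same expressions used in the GROUP BY) could look like:
-- frame_start is the start of the 8-hour window a departure falls into,
-- with windows anchored at 22:00 (i.e. a 2-hour offset from midnight)
SELECT TO_CHAR(frame_start, 'YYYY-MM-DD HH24:MI') || ' - '
       || TO_CHAR(frame_start + 8/24, 'YYYY-MM-DD HH24:MI') AS time_frame,
       COUNT(*) AS number_flights
FROM (
  SELECT TRUNC(Departure + 2/24) - 2/24
         + FLOOR((Departure + 2/24 - TRUNC(Departure + 2/24)) * 3) * 8/24 AS frame_start
  FROM   departures
)
GROUP BY frame_start
ORDER BY frame_start;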
Something like this should work. Please note the two input variables, first_time and timespan. The timespan is whatever you want it to be (I wrote it in the form 8/24 for eight hours; if you make timespan into a bind variable as a number expressed in HOURS, you need the division by 24). Due to the way I wrote the formulas, there are NO requirements on first_time other than it should be one of your boundary date/times; it may even be in the future, it won't change the results. It may also be made into a bind variable, then you can decide in what format you want it to be made available to the query.
with timetable (departure, flight, destination) as (
select to_date('2016-06-01 10:12:00', 'yyyy-mm-dd hh24:mi:ss'), 'LH1234', 'New York'
from dual union all
select to_date('2016-06-02 14:23:00', 'yyyy-mm-dd hh24:mi:ss'), 'LH1235', 'Berlin'
from dual union all
select to_date('2016-06-02 14:30:00', 'yyyy-mm-dd hh24:mi:ss'), 'LH1236', 'Tokyo'
from dual union all
select to_date('2016-06-03 18:45:00', 'yyyy-mm-dd hh24:mi:ss'), 'LH1237', 'Belgrad'
from dual union all
select to_date('2016-06-04 04:10:00', 'yyyy-mm-dd hh24:mi:ss'), 'LH1237', 'Rio'
from dual union all
select to_date('2016-06-04 06:20:00', 'yyyy-mm-dd hh24:mi:ss'), 'LH1237', 'Paris'
from dual
),
input_values (first_time, timespan) as (
select to_date('2010-01-01 06:00:00', 'yyyy-mm-dd hh24:mi:ss'), 8/24 from dual
),
prep (adj_departure, flight, destination) as (
select first_time + timespan * floor((departure - first_time) / timespan),
flight, destination
from timetable, input_values
)
select to_char(adj_departure, 'yyyy-mm-dd hh24:mi:ss') || ' - ' ||
to_char(adj_departure + timespan, 'yyyy-mm-dd hh24:mi:ss') as time_interval,
count(*) as ct
from prep, input_values
group by adj_departure, timespan
order by adj_departure
;
Output:
TIME_INTERVAL CT
----------------------------------------- ----------
2016-06-01 06:00:00 - 2016-06-01 14:00:00 1
2016-06-02 14:00:00 - 2016-06-02 22:00:00 2
2016-06-03 14:00:00 - 2016-06-03 22:00:00 1
2016-06-03 22:00:00 - 2016-06-04 06:00:00 1
2016-06-04 06:00:00 - 2016-06-04 14:00:00 1