SQL - Split Open & Close Time Into Intervals of 30 Minutes

Purpose: I work in the hospitality industry. I want to understand at what time the restaurant is full and at what time it is less busy. I have the opening and closing times, and I want to split them into 30-minute intervals.
I would really appreciate it if you could please help me.
Thanking you in advance.
Table
Check# Open CloseTime
25484 17:34 18:06
25488 18:04 21:22
Output
Check# Open Close Duration
25484 17:34 18:00 0:26
25484 18:00 18:06 0:06
25488 18:04 18:30 0:26
25488 18:30 19:00 0:30
25488 19:00 19:30 0:30
25488 19:30 20:00 0:30
25488 20:00 20:30 0:30
25488 20:30 21:00 0:30
25488 21:00 21:22 0:22
I am new to SQL. I am good at Excel, but due to its limitations I want to use SQL; I just know the basics of SQL.
I have searched on Google but could not find a solution. Everything I found uses date keywords rather than the field names in the code, hence I am unable to adapt them.

You could try this; it works in MySQL 8.0:
WITH RECURSIVE times AS (
  SELECT time '0:00' AS `Open`, time '0:30' AS `Close`
  UNION ALL
  SELECT addtime(`Open`, '0:30'), addtime(`Close`, '0:30')
  FROM times
  WHERE `Open` < time '23:30'
)
SELECT c.`Check`,
       greatest(t.`Open`, c.`Open`) AS `Open`,
       least(t.`Close`, c.`CloseTime`) AS `Close`,
       timediff(least(t.`Close`, c.`CloseTime`), greatest(t.`Open`, c.`Open`)) AS `Duration`
FROM times t
JOIN checks c ON (c.`Open` < t.`Close` AND c.`CloseTime` > t.`Open`);
| Check | Open | Close | Duration |
| ----- | -------- | -------- | -------- |
| 25484 | 17:34:00 | 18:00:00 | 00:26:00 |
| 25484 | 18:00:00 | 18:06:00 | 00:06:00 |
| 25488 | 18:04:00 | 18:30:00 | 00:26:00 |
| 25488 | 18:30:00 | 19:00:00 | 00:30:00 |
| 25488 | 19:00:00 | 19:30:00 | 00:30:00 |
| 25488 | 19:30:00 | 20:00:00 | 00:30:00 |
| 25488 | 20:00:00 | 20:30:00 | 00:30:00 |
| 25488 | 20:30:00 | 21:00:00 | 00:30:00 |
| 25488 | 21:00:00 | 21:22:00 | 00:22:00 |
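For reference, a minimal setup that reproduces this result (the table definition is an assumption; the question only shows the column names):

-- Assumed schema; only the columns the query needs.
CREATE TABLE checks (
  `Check` INT,
  `Open` TIME,
  `CloseTime` TIME
);

INSERT INTO checks VALUES
  (25484, '17:34', '18:06'),
  (25488, '18:04', '21:22');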

This works for SQL Server 2019:
WITH times([Open], [Close]) AS (
  SELECT cast({t'00:00:00'} AS time) AS [Open],
         cast({t'00:30:00'} AS time) AS [Close]
  UNION ALL
  SELECT dateadd(minute, 30, [Open]), dateadd(minute, 30, [Close])
  FROM times
  WHERE [Open] < cast({t'23:30:00'} AS time)
)
SELECT c.[Check],
       iif(t.[Open] > c.[Open], t.[Open], c.[Open]) AS [Open],
       iif(t.[Close] < c.[CloseTime], t.[Close], c.[CloseTime]) AS [Close],
       datediff(minute,
                iif(t.[Open] > c.[Open], t.[Open], c.[Open]),
                iif(t.[Close] < c.[CloseTime], t.[Close], c.[CloseTime])) AS Duration
FROM times t
JOIN checks c ON (c.[Open] < t.[Close] AND c.[CloseTime] > t.[Open]);
Check | Open | Close | Duration
25484 | 17:34:00.0000000 | 18:00:00.0000000 | 26
25484 | 18:00:00.0000000 | 18:06:00.0000000 | 6
25488 | 18:04:00.0000000 | 18:30:00.0000000 | 26
25488 | 18:30:00.0000000 | 19:00:00.0000000 | 30
25488 | 19:00:00.0000000 | 19:30:00.0000000 | 30
25488 | 19:30:00.0000000 | 20:00:00.0000000 | 30
25488 | 20:00:00.0000000 | 20:30:00.0000000 | 30
25488 | 20:30:00.0000000 | 21:00:00.0000000 | 30
25488 | 21:00:00.0000000 | 21:22:00.0000000 | 22
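Note that SQL Server limits recursive CTEs to 100 recursion levels by default. The CTE above recurses 47 times, so it fits; with a smaller step (e.g. 5-minute intervals, 287 levels) you would need to lift the cap by appending a hint to the final SELECT:

FROM times t
JOIN checks c ON (c.[Open] < t.[Close] AND c.[CloseTime] > t.[Open])
OPTION (MAXRECURSION 0);  -- 0 removes the recursion limit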

Related

PostgreSQL count max number of concurrent user sessions per hour

Situation
We have a PostgreSQL 9.1 database containing user sessions with login date/time and logout date/time per row. Table looks like this:
user_id | login_ts | logout_ts
------------+--------------+--------------------------------
USER1 | 2021-02-03 09:23:00 | 2021-02-03 11:44:00
USER2 | 2021-02-03 10:49:00 | 2021-02-03 13:30:00
USER3 | 2021-02-03 13:32:00 | 2021-02-03 15:31:00
USER4 | 2021-02-04 13:50:00 | 2021-02-04 14:53:00
USER5 | 2021-02-04 14:44:00 | 2021-02-04 15:21:00
USER6 | 2021-02-04 14:52:00 | 2021-02-04 17:59:00
Goal
We would like to get the max number of concurrent users for each of the 24 hours of each day in the time range, like this:
date | hour | sessions
-----------+-------+-----------
2021-02-03 | 01:00 | 0
2021-02-03 | 02:00 | 0
2021-02-03 | 03:00 | 0
2021-02-03 | 04:00 | 0
2021-02-03 | 05:00 | 0
2021-02-03 | 06:00 | 0
2021-02-03 | 07:00 | 0
2021-02-03 | 08:00 | 0
2021-02-03 | 09:00 | 1
2021-02-03 | 10:00 | 2
2021-02-03 | 11:00 | 2
2021-02-03 | 12:00 | 1
2021-02-03 | 13:00 | 1
2021-02-03 | 14:00 | 1
2021-02-03 | 15:00 | 0
2021-02-03 | 16:00 | 0
2021-02-03 | 17:00 | 0
2021-02-03 | 18:00 | 0
2021-02-03 | 19:00 | 0
2021-02-03 | 20:00 | 0
2021-02-03 | 21:00 | 0
2021-02-03 | 22:00 | 0
2021-02-03 | 23:00 | 0
2021-02-03 | 24:00 | 0
2021-02-04 | 01:00 | 0
2021-02-04 | 02:00 | 0
2021-02-04 | 03:00 | 0
2021-02-04 | 04:00 | 0
2021-02-04 | 05:00 | 0
2021-02-04 | 06:00 | 0
2021-02-04 | 07:00 | 0
2021-02-04 | 08:00 | 0
2021-02-04 | 09:00 | 0
2021-02-04 | 10:00 | 0
2021-02-04 | 11:00 | 0
2021-02-04 | 12:00 | 0
2021-02-04 | 13:00 | 1
2021-02-04 | 14:00 | 3
2021-02-04 | 15:00 | 1
2021-02-04 | 16:00 | 1
2021-02-04 | 17:00 | 1
2021-02-04 | 18:00 | 0
2021-02-04 | 19:00 | 0
2021-02-04 | 20:00 | 0
2021-02-04 | 21:00 | 0
2021-02-04 | 22:00 | 0
2021-02-04 | 23:00 | 0
2021-02-04 | 24:00 | 0
Considerations
"Concurrent" means at the same point in time. Thus user2 and user3 do not overlap for
13:00, but user4 and user6 do overlap for 14:00 even though they only overlap for 1 minute.
User sessions can span multiple hours and would thus count for each hour they are part of.
Each user can only be online once at one point in time.
If there are no users for a particular hour, this should return 0.
Similar questions
A similar question was answered by Erwin Brandstetter in "Count max. number of concurrent user sessions per day". However, that is per day rather than per hour, and I am apparently too much of a noob at PostgreSQL to translate it into hourly, so I'm hoping someone can help.
I would decompose this into two problems:
Find the number of overlaps and when they begin and end.
Find the hours.
Note two things:
I am assuming that the logout time '2021-02-04 17:59:00' in the last record is a typo.
The following goes by the beginning of the hour and puts the date/hour in a single column.
First, calculate the overlaps. For this, unpivot the logins and logouts. Put in a counter of +1 for logins and -1 for logouts and do a cumulative sum. This looks like:
with overlap as (
    select v.ts, sum(v.inc) as inc,
           sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
           lead(v.ts) over (order by v.ts) as next_ts
    from sessions s cross join lateral
         (values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
    group by v.ts
)
select *
from overlap
order by ts;
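For the sample data, this should return rows along these lines (worked out by hand from the logins/logouts above; abbreviated):

ts                  | inc | num_overlaps | next_ts
--------------------+-----+--------------+---------------------
2021-02-03 09:23:00 |   1 |            1 | 2021-02-03 10:49:00
2021-02-03 10:49:00 |   1 |            2 | 2021-02-03 11:44:00
2021-02-03 11:44:00 |  -1 |            1 | 2021-02-03 13:30:00
...
2021-02-04 14:52:00 |   1 |            3 | 2021-02-04 14:53:00
...
2021-02-04 17:59:00 |  -1 |            0 | (null)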
For the next step, use generate_series() to generate timestamps one hour apart. Look for the maximum value during that period using left join and group by:
with overlap as (
    select v.ts, sum(v.inc) as inc,
           sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
           lead(v.ts) over (order by v.ts) as next_ts
    from sessions s cross join lateral
         (values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
    group by v.ts
)
select gs.hh, coalesce(max(o.num_overlaps), 0) as num_overlaps
from generate_series('2021-02-03'::date, '2021-02-05'::date, interval '1 hour') gs(hh)
left join overlap o
       on o.ts < gs.hh + interval '1 hour'
      and o.next_ts > gs.hh
group by gs.hh
order by gs.hh;
Here is a db<>fiddle using your data, fixed with a reasonable logout time for the last record.
For any time period, you can calculate the number of concurrent sessions using the OVERLAPS operator in SQL:
CREATE TEMP TABLE sessions (
  user_id text not null,
  login_ts timestamp,
  logout_ts timestamp
);

INSERT INTO sessions
SELECT 'webuser', d,
       d + ((1 + random() * 300)::text || ' seconds')::interval
FROM generate_series(
       '2021-02-28 07:42'::timestamp,
       '2021-03-01 07:42'::timestamp,
       '5 seconds'::interval) AS d;

SELECT s1.user_id, s1.login_ts, s1.logout_ts,
       (SELECT count(*) FROM sessions s2
        WHERE (s2.login_ts, s2.logout_ts) OVERLAPS (s1.login_ts, s1.logout_ts))
       AS parallel_sessions
FROM sessions s1 LIMIT 10;
user_id | login_ts | logout_ts | parallel_sessions
---------+---------------------+----------------------------+------------------
webuser | 2021-02-28 07:42:00 | 2021-02-28 07:42:25.528594 | 6
webuser | 2021-02-28 07:42:05 | 2021-02-28 07:45:50.513769 | 47
webuser | 2021-02-28 07:42:10 | 2021-02-28 07:44:18.810066 | 28
webuser | 2021-02-28 07:42:15 | 2021-02-28 07:45:17.3888 | 40
webuser | 2021-02-28 07:42:20 | 2021-02-28 07:43:14.325476 | 15
webuser | 2021-02-28 07:42:25 | 2021-02-28 07:43:44.174841 | 21
webuser | 2021-02-28 07:42:30 | 2021-02-28 07:43:32.679052 | 18
webuser | 2021-02-28 07:42:35 | 2021-02-28 07:45:12.554117 | 38
webuser | 2021-02-28 07:42:40 | 2021-02-28 07:46:37.94311 | 55
webuser | 2021-02-28 07:42:45 | 2021-02-28 07:43:08.398444 | 13
(10 rows)
This works well on small data sets, but for better performance use PostgreSQL range types, as below. This works on Postgres 9.2 and later.
ALTER TABLE sessions ADD timerange tsrange;
UPDATE sessions SET timerange = tsrange(login_ts, logout_ts);
CREATE INDEX ON sessions USING gist (timerange);

CREATE TEMP TABLE level1 AS
SELECT s1.user_id, s1.login_ts, s1.logout_ts,
       (SELECT count(*) FROM sessions s2
        WHERE s2.timerange && s1.timerange) AS parallel_sessions
FROM sessions s1;

SELECT date_trunc('hour', login_ts) AS hour, count(*),
       max(parallel_sessions)
FROM level1
GROUP BY hour;
hour | count | max
---------------------+-------+-----
2021-02-28 14:00:00 | 720 | 98
2021-03-01 03:00:00 | 720 | 99
2021-03-01 06:00:00 | 720 | 94
2021-02-28 09:00:00 | 720 | 96
2021-02-28 10:00:00 | 720 | 97
2021-02-28 18:00:00 | 720 | 94
2021-02-28 11:00:00 | 720 | 97
2021-03-01 00:00:00 | 720 | 97
2021-02-28 19:00:00 | 720 | 99
2021-02-28 16:00:00 | 720 | 94
2021-02-28 17:00:00 | 720 | 95
2021-03-01 02:00:00 | 720 | 99
2021-02-28 08:00:00 | 720 | 96
2021-02-28 23:00:00 | 720 | 94
2021-03-01 07:00:00 | 505 | 92
2021-03-01 04:00:00 | 720 | 95
2021-02-28 21:00:00 | 720 | 97
2021-03-01 01:00:00 | 720 | 93
2021-02-28 22:00:00 | 720 | 96
2021-03-01 05:00:00 | 720 | 93
2021-02-28 20:00:00 | 720 | 95
2021-02-28 13:00:00 | 720 | 95
2021-02-28 12:00:00 | 720 | 97
2021-02-28 15:00:00 | 720 | 98
2021-02-28 07:00:00 | 216 | 93
(25 rows)

SQL query for setting column based on last seven entries

Problem
I am having trouble figuring out how to create a query that can tell whether any user entry is preceded by 7 days without any activity (secondsPlayed == 0) and, if so, indicate it with the value 1, otherwise 0.
This also means that if the user has fewer than 7 entries, the value will be 0 across all entries.
Input table:
+------------------------------+-------------------------+---------------+
| userid | estimationDate | secondsPlayed |
+------------------------------+-------------------------+---------------+
| a | 2016-07-14 00:00:00 UTC | 192.5 |
| a | 2016-07-15 00:00:00 UTC | 357.3 |
| a | 2016-07-16 00:00:00 UTC | 0 |
| a | 2016-07-17 00:00:00 UTC | 0 |
| a | 2016-07-18 00:00:00 UTC | 0 |
| a | 2016-07-19 00:00:00 UTC | 0 |
| a | 2016-07-20 00:00:00 UTC | 0 |
| a | 2016-07-21 00:00:00 UTC | 0 |
| a | 2016-07-22 00:00:00 UTC | 0 |
| a | 2016-07-23 00:00:00 UTC | 0 |
| a | 2016-07-24 00:00:00 UTC | 0 |
| ---------------------------- | ---------------------- | ---- |
| b | 2016-07-02 00:00:00 UTC | 31.2 |
| b | 2016-07-03 00:00:00 UTC | 42.1 |
| b | 2016-07-04 00:00:00 UTC | 41.9 |
| b | 2016-07-05 00:00:00 UTC | 43.2 |
| b | 2016-07-06 00:00:00 UTC | 91.5 |
| b | 2016-07-07 00:00:00 UTC | 0 |
| b | 2016-07-08 00:00:00 UTC | 0 |
| b | 2016-07-09 00:00:00 UTC | 239.1 |
| b | 2016-07-10 00:00:00 UTC | 0 |
+------------------------------+-------------------------+---------------+
The intended output table should look like this:
Output table:
+------------------------------+-------------------------+---------------+----------+
| userid | estimationDate | secondsPlayed | inactive |
+------------------------------+-------------------------+---------------+----------+
| a | 2016-07-14 00:00:00 UTC | 192.5 | 0 |
| a | 2016-07-15 00:00:00 UTC | 357.3 | 0 |
| a | 2016-07-16 00:00:00 UTC | 0 | 0 |
| a | 2016-07-17 00:00:00 UTC | 0 | 0 |
| a | 2016-07-18 00:00:00 UTC | 0 | 0 |
| a | 2016-07-19 00:00:00 UTC | 0 | 0 |
| a | 2016-07-20 00:00:00 UTC | 0 | 0 |
| a | 2016-07-21 00:00:00 UTC | 0 | 0 |
| a | 2016-07-22 00:00:00 UTC | 0 | 1 |
| a | 2016-07-23 00:00:00 UTC | 0 | 1 |
| a | 2016-07-24 00:00:00 UTC | 0 | 1 |
| ---------------------------- | ----------------------- | ----- | ----- |
| b | 2016-07-02 00:00:00 UTC | 31.2 | 0 |
| b | 2016-07-03 00:00:00 UTC | 42.1 | 0 |
| b | 2016-07-04 00:00:00 UTC | 41.9 | 0 |
| b | 2016-07-05 00:00:00 UTC | 43.2 | 0 |
| b | 2016-07-06 00:00:00 UTC | 91.5 | 0 |
| b | 2016-07-07 00:00:00 UTC | 0 | 0 |
| b | 2016-07-08 00:00:00 UTC | 0 | 0 |
| b | 2016-07-09 00:00:00 UTC | 239.1 | 0 |
| b | 2016-07-10 00:00:00 UTC | 0 | 0 |
+------------------------------+-------------------------+---------------+----------+
Thoughts
At first I was thinking about using the LAG function with an offset of 7, but this would obviously not account for any of the entries in between.
I was also thinking about creating a rolling window/average over a period of 7 days and evaluating whether it is above 0, however this might be a bit above my skill level.
Any chance anyone has a good solution to this problem?
Assuming that you have data every day (which seems like a reasonable assumption), you can sum a window function:
select t.*,
(case when sum(secondsplayed) over (partition by userid
order by estimationdate
rows between 6 preceding and current row
) = 0 and
row_number() over (partition by userid order by estimationdate) >= 7
then 1
else 0
end) as inactive
from t;
In addition to no holes in the dates, this also assumes that secondsplayed is never negative. (Negative values can easily be incorporated into the logic, but that seems unnecessary.)
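For illustration, here is a sketch of that adjustment (same assumed table and columns as above): check whether any of the last 7 rows had positive play time instead of summing, so negative values cannot cancel out activity.

select t.*,
       (case when max(case when secondsplayed > 0 then 1 else 0 end) over
                  (partition by userid
                   order by estimationdate
                   rows between 6 preceding and current row
                  ) = 0 and
                  row_number() over (partition by userid order by estimationdate) >= 7
             then 1
             else 0
        end) as inactive
from t;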
In my experience, this type of input table does not contain inactivity entries and usually looks like the following (only activity entries are present):
Input table:
+------------------------------+-------------------------+---------------+
| userid | estimationDate | secondsPlayed |
+------------------------------+-------------------------+---------------+
| a | 2016-07-14 00:00:00 UTC | 192.5 |
| a | 2016-07-15 00:00:00 UTC | 357.3 |
| ---------------------------- | ---------------------- | ---- |
| b | 2016-07-02 00:00:00 UTC | 31.2 |
| b | 2016-07-03 00:00:00 UTC | 42.1 |
| b | 2016-07-04 00:00:00 UTC | 41.9 |
| b | 2016-07-05 00:00:00 UTC | 43.2 |
| b | 2016-07-06 00:00:00 UTC | 91.5 |
| b | 2016-07-09 00:00:00 UTC | 239.1 |
+------------------------------+-------------------------+---------------+
So, the following is for BigQuery Standard SQL, with input as above:
#standardSQL
WITH `project.dataset.table` AS (
SELECT 'a' userid, TIMESTAMP '2016-07-14 00:00:00 UTC' estimationDate, 192.5 secondsPlayed UNION ALL
SELECT 'a', '2016-07-15 00:00:00 UTC', 357.3 UNION ALL
SELECT 'b', '2016-07-02 00:00:00 UTC', 31.2 UNION ALL
SELECT 'b', '2016-07-03 00:00:00 UTC', 42.1 UNION ALL
SELECT 'b', '2016-07-04 00:00:00 UTC', 41.9 UNION ALL
SELECT 'b', '2016-07-05 00:00:00 UTC', 43.2 UNION ALL
SELECT 'b', '2016-07-06 00:00:00 UTC', 91.5 UNION ALL
SELECT 'b', '2016-07-09 00:00:00 UTC', 239.1
), time_frame AS (
SELECT day
FROM UNNEST(GENERATE_DATE_ARRAY('2016-07-02', '2016-07-24')) day
)
SELECT
users.userid,
day,
IFNULL(secondsPlayed, 0) secondsPlayed,
CAST(1 - SIGN(SUM(IFNULL(secondsPlayed, 0))
OVER(
PARTITION BY users.userid
ORDER BY UNIX_DATE(day)
RANGE BETWEEN 6 PRECEDING AND CURRENT ROW
)) AS INT64) AS inactive
FROM time_frame tf
CROSS JOIN (SELECT DISTINCT userid FROM `project.dataset.table`) users
LEFT JOIN `project.dataset.table` t
ON day = DATE(estimationDate) AND users.userid = t.userid
ORDER BY userid, day
with result
Row userid day secondsPlayed inactive
...
13 a 2016-07-14 192.5 0
14 a 2016-07-15 357.3 0
15 a 2016-07-16 0.0 0
16 a 2016-07-17 0.0 0
17 a 2016-07-18 0.0 0
18 a 2016-07-19 0.0 0
19 a 2016-07-20 0.0 0
20 a 2016-07-21 0.0 0
21 a 2016-07-22 0.0 1
22 a 2016-07-23 0.0 1
23 a 2016-07-24 0.0 1
24 b 2016-07-02 31.2 0
25 b 2016-07-03 42.1 0
26 b 2016-07-04 41.9 0
27 b 2016-07-05 43.2 0
28 b 2016-07-06 91.5 0
29 b 2016-07-07 0.0 0
30 b 2016-07-08 0.0 0
31 b 2016-07-09 239.1 0
32 b 2016-07-10 0.0 0
...
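A possible refinement (an untested sketch against the same assumed table): derive the time_frame range from the data instead of hard-coding '2016-07-02' and '2016-07-24':

time_frame AS (
  SELECT day
  FROM (
    SELECT MIN(DATE(estimationDate)) AS lo, MAX(DATE(estimationDate)) AS hi
    FROM `project.dataset.table`
  ), UNNEST(GENERATE_DATE_ARRAY(lo, hi)) day
)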

HQL: split time into intervals

I have a Hive table with some data, and I would like to split it into 15-minute intervals and return the total call duration for every interval.
Hive table example:
ID Start End Total Duration
1 1502296261 1502325061 28800
My output should be shown as :
ID Interval Duration
1 2017-08-09 18:30:00 839
1 2017-08-09 18:45:00 900
1 2017-08-09 19:00:00 900
...
1 2017-08-10 02:15:00 900
1 2017-08-10 02:30:00 61
What is the best way to do that efficiently?
Thanks.
This is the basic solution.
The displayed timestamp (Interval) depends on your system timezone.
with t as (select stack(1, 1, 1502296261, 1502325061) as (`ID`, `Start`, `End`))
select t.`ID` as `ID`,
       from_unixtime((t.`Start` div (15*60) + pe.pos) * (15*60)) as `Interval`,
       case
         when pe.pos = t.`End` div (15*60) - t.`Start` div (15*60)
         then t.`End`
         else (t.`Start` div (15*60) + pe.pos + 1) * (15*60)
       end
       - case
           when pe.pos = 0
           then t.`Start`
           else (t.`Start` div (15*60) + pe.pos) * (15*60)
         end as `Duration`
from t
lateral view posexplode(split(space(int(t.`End` div (15*60) - t.`Start` div (15*60))), ' ')) pe
;
+----+---------------------+----------+
| id | interval | duration |
+----+---------------------+----------+
| 1 | 2017-08-09 09:30:00 | 839 |
| 1 | 2017-08-09 09:45:00 | 900 |
| 1 | 2017-08-09 10:00:00 | 900 |
| 1 | 2017-08-09 10:15:00 | 900 |
| 1 | 2017-08-09 10:30:00 | 900 |
| 1 | 2017-08-09 10:45:00 | 900 |
| 1 | 2017-08-09 11:00:00 | 900 |
| 1 | 2017-08-09 11:15:00 | 900 |
| 1 | 2017-08-09 11:30:00 | 900 |
| 1 | 2017-08-09 11:45:00 | 900 |
| 1 | 2017-08-09 12:00:00 | 900 |
| 1 | 2017-08-09 12:15:00 | 900 |
| 1 | 2017-08-09 12:30:00 | 900 |
| 1 | 2017-08-09 12:45:00 | 900 |
| 1 | 2017-08-09 13:00:00 | 900 |
| 1 | 2017-08-09 13:15:00 | 900 |
| 1 | 2017-08-09 13:30:00 | 900 |
| 1 | 2017-08-09 13:45:00 | 900 |
| 1 | 2017-08-09 14:00:00 | 900 |
| 1 | 2017-08-09 14:15:00 | 900 |
| 1 | 2017-08-09 14:30:00 | 900 |
| 1 | 2017-08-09 14:45:00 | 900 |
| 1 | 2017-08-09 15:00:00 | 900 |
| 1 | 2017-08-09 15:15:00 | 900 |
| 1 | 2017-08-09 15:30:00 | 900 |
| 1 | 2017-08-09 15:45:00 | 900 |
| 1 | 2017-08-09 16:00:00 | 900 |
| 1 | 2017-08-09 16:15:00 | 900 |
| 1 | 2017-08-09 16:30:00 | 900 |
| 1 | 2017-08-09 16:45:00 | 900 |
| 1 | 2017-08-09 17:00:00 | 900 |
| 1 | 2017-08-09 17:15:00 | 900 |
| 1 | 2017-08-09 17:30:00 | 61 |
+----+---------------------+----------+
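To verify the short first and last rows by hand (all numbers derived from the sample row):

-- Start = 1502296261: 1502296261 div 900 = 1669218, so the first bucket
-- spans 1669218*900 = 1502296200 .. 1502297100, and the first duration is
-- 1502297100 - 1502296261 = 839 seconds, matching the first row.
-- End = 1502325061: 1502325061 div 900 = 1669250, so the last bucket starts
-- at 1669250*900 = 1502325000, and 1502325061 - 1502325000 = 61 seconds,
-- matching the last row.
select (1502296261 div 900 + 1) * 900 - 1502296261;  -- 839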

Postgres, Update TIMESTAMP to current date but preserve time of day

In my Postgres database, I have the following table:
SELECT start_at, end_at FROM schedules;
+---------------------+---------------------+
| start_at | end_at |
|---------------------+---------------------|
| 2016-09-05 16:30:00 | 2016-09-05 17:30:00 |
| 2016-09-05 17:30:00 | 2016-09-05 18:30:00 |
| 2017-08-13 03:00:00 | 2017-08-13 07:00:00 |
| 2017-08-13 03:00:00 | 2017-08-13 07:00:00 |
| 2017-08-13 18:42:26 | 2017-08-13 21:30:46 |
| 2017-08-10 00:00:00 | 2017-08-10 03:30:00 |
| 2017-08-09 18:00:00 | 2017-08-10 03:00:00 |
| 2017-08-06 23:00:00 | 2017-08-07 03:00:00 |
| 2017-08-07 01:00:00 | 2017-08-07 03:48:20 |
| 2017-08-07 01:00:00 | 2017-08-07 03:48:20 |
| 2017-08-07 18:05:00 | 2017-08-07 20:53:20 |
| 2017-08-07 14:00:00 | 2017-08-08 01:00:00 |
| 2017-08-07 18:00:00 | 2017-08-07 20:48:20 |
| 2017-08-08 08:00:00 | 2017-08-09 00:00:00 |
| 2017-08-09 21:30:00 | 2017-08-10 00:18:20 |
| 2017-08-13 03:53:26 | 2017-08-13 06:41:46 |
+---------------------+---------------------+
Assume I also have an ID column; what I want to do is update all the start and end times to be for today (now). What is the most efficient SQL to accomplish this? My table could have millions of rows.
The best I can think of is this:
update schedules
set start_at = current_date + start_at::time
, end_at = current_date + end_at::time
WHERE start_at::date <> current_date
or end_at::date <> current_date;
The arithmetic is fast compared to accessing the rows.
If not all rows need updating, the WHERE clause will help efficiency; updates are expensive.
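A quick sanity check of the date-plus-time arithmetic (illustrative, using one of the values above):

select current_date + '2016-09-05 16:30:00'::timestamp::time AS new_start;
-- returns today's date combined with the time of day 16:30:00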

Oracle: Select parallel entries

I am searching for the most efficient way to run a relatively complicated query on a relatively large table.
The concept is that:
I have a table that holds records of phases that can run parallel to each other
The number of records exceeds 5 million (and is increasing)
The time period starts about 5 years ago
For performance reasons, this select could be applied to only the last 3 months (about 300,000 records), but only if it is not physically possible to do it for the whole table
Oracle version: 11g
The data sample looks as follows:
Table Phases (ID, START_TS, END_TS, PRIO)
1 10:00:00 10:20:10 10
2 10:05:00 10:10:00 11
3 10:05:20 10:15:00 9
4 10:16:00 10:25:00 8
5 10:24:00 10:45:15 1
6 10:26:00 10:30:00 10
7 10:27:00 10:35:00 15
8 10:34:00 10:50:00 5
9 10:50:00 10:55:00 20
10 10:55:00 11:00:00 15
Above you can see how the information is currently stored (of course there are several other columns with irrelevant information).
There are two requirements (or problems to be solved):
If we sum the durations of all the phases, the result is MUCH more than the one hour that the above data represent. (There can be holes between the phases, so taking the first start_ts and the last end_ts would not be sufficient.)
The data should be displayed in a form that makes it visible which phases run parallel to which, and which phase had the highest priority at each time, as shown in the expected view below.
There it is easy to pick out the highest-priority phase at each time (HIGHEST_PRIO), and adding up their durations would give the actual total duration.
View V_Parallel_Phases (ID, START_TS, END_TS, PRIO, HIGHEST_PRIO)
-> Optional Columns: Part_of_ID / Runs_Parallel
1 10:00:00 10:05:20 10 True (--> Part_1 / False)
1 10:05:20 10:15:00 10 False (--> Part_2 / True)
2 10:05:00 10:10:00 11 False (--> Part_1 / True)
3 10:05:20 10:15:00 9 True (--> Part_1 / True)
1 10:15:00 10:16:00 10 True (--> Part_3 / True)
1 10:16:00 10:20:10 10 False (--> Part_4 / True)
4 10:16:00 10:24:00 8 True (--> Part_1 / True)
4 10:24:00 10:25:00 8 False (--> Part_2 / True)
5 10:24:00 10:45:15 1 True (--> Part_1 / True)
6 10:26:00 10:30:00 10 False (--> Part_1 / True)
7 10:27:00 10:35:00 15 False (--> Part_1 / True)
8 10:34:00 10:45:15 5 False (--> Part_1 / True)
8 10:45:15 10:50:00 5 True (--> Part_2 / True)
9 10:50:00 10:55:00 20 True (--> Part_2 / False)
10 10:55:00 11:00:00 15 True (--> Part_2 / False)
Unfortunately I am not aware of an efficient way to write this query. The previous solution was to do the above calculations programmatically in the tool that generates a large report, but it was a total failure: from the 30 seconds needed before these calculations, it now needs over 10 minutes, without even taking the priorities of the phases into consideration.
So I thought of translating this code into SQL as either: a) a view, b) a materialized view, or c) a table that I would fill with a procedure once in a while (depending on the required duration).
PS: I am aware that Oracle has some analytic functions that can handle complicated queries, but I am not aware of which could actually help me with the current problem.
Thank you in advance!
This is an incomplete answer, but I need to know if this approach is viable before going on. I believe it is possible to do this completely in SQL, but I am not sure how the performance will be.
First, find all points in time where there is a transition:
CREATE VIEW Events AS
SELECT START_TS AS TS
FROM Phases
UNION
SELECT END_TS AS TS
FROM Phases
;
Then create (start, end) tuples from those points in time:
CREATE VIEW Segments AS
SELECT START.TS AS START_TS,
       MIN(END.TS) AS END_TS
FROM Events AS START
JOIN Events AS END
WHERE START.TS < END.TS
GROUP BY START.TS
;
From here on, doing the rest should be fairly straightforward. Here is a query that lists the segments and all the phases that are active in each segment:
SELECT *
FROM Segments
JOIN Phases
WHERE Segments.START_TS BETWEEN Phases.START_TS AND Phases.END_TS
AND Segments.END_TS BETWEEN Phases.START_TS AND Phases.END_TS
ORDER BY Segments.START_TS
;
The rest can be done with subselects and some aggregates.
| START_TS | END_TS | ID | START_TS | END_TS | PRIO |
|----------|----------|----|----------|----------|------|
| 10:00:00 | 10:05:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:05:00 | 10:05:20 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:05:00 | 10:05:20 | 2 | 10:05:00 | 10:10:00 | 11 |
| 10:05:20 | 10:10:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:05:20 | 10:10:00 | 2 | 10:05:00 | 10:10:00 | 11 |
| 10:05:20 | 10:10:00 | 3 | 10:05:20 | 10:15:00 | 9 |
| 10:10:00 | 10:15:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:10:00 | 10:15:00 | 3 | 10:05:20 | 10:15:00 | 9 |
| 10:15:00 | 10:16:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:16:00 | 10:20:10 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:16:00 | 10:20:10 | 4 | 10:16:00 | 10:25:00 | 8 |
| 10:20:10 | 10:24:00 | 4 | 10:16:00 | 10:25:00 | 8 |
| 10:24:00 | 10:25:00 | 4 | 10:16:00 | 10:25:00 | 8 |
| 10:24:00 | 10:25:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:25:00 | 10:26:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:26:00 | 10:27:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:26:00 | 10:27:00 | 6 | 10:26:00 | 10:30:00 | 10 |
| 10:27:00 | 10:30:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:27:00 | 10:30:00 | 6 | 10:26:00 | 10:30:00 | 10 |
| 10:27:00 | 10:30:00 | 7 | 10:27:00 | 10:35:00 | 15 |
| 10:30:00 | 10:34:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:30:00 | 10:34:00 | 7 | 10:27:00 | 10:35:00 | 15 |
| 10:34:00 | 10:35:00 | 8 | 10:34:00 | 10:50:00 | 5 |
| 10:34:00 | 10:35:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:34:00 | 10:35:00 | 7 | 10:27:00 | 10:35:00 | 15 |
| 10:35:00 | 10:45:15 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:35:00 | 10:45:15 | 8 | 10:34:00 | 10:50:00 | 5 |
| 10:45:15 | 10:50:00 | 8 | 10:34:00 | 10:50:00 | 5 |
| 10:50:00 | 10:55:00 | 9 | 10:50:00 | 10:55:00 | 20 |
| 10:55:00 | 11:00:00 | 10 | 10:55:00 | 11:00:00 | 15 |
There is a SQL fiddle demonstrating the whole thing here:
http://sqlfiddle.com/#!9/d801b/2
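Building on that, one possible sketch (untested, and not part of the original answer) for deriving the HIGHEST_PRIO flag, assuming that a lower PRIO value means higher priority, as in the expected view:

SELECT s.START_TS, s.END_TS, p.ID, p.PRIO,
       CASE WHEN p.PRIO = MIN(p.PRIO) OVER (PARTITION BY s.START_TS, s.END_TS)
            THEN 'True' ELSE 'False'
       END AS HIGHEST_PRIO
FROM Segments s
JOIN Phases p
  ON s.START_TS >= p.START_TS
 AND s.END_TS   <= p.END_TS
ORDER BY s.START_TS, p.PRIO;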