SQL to show overlapping time periods

How can I check in PostgreSQL 9.2 (with a SQL command) whether, among the timestamp records, some period overlaps another period for the same id_user? I need to correct an existing table.
For example, a query on the data below should show rows 1, 3 and 4.
id | id_user | timedate0 | timedate2
---------------------------------------------------
1 | 1 | 2020-04-20 12:00:00 | 2020-04-20 14:00:00
2 | 1 | 2020-04-20 17:00:00 | 2020-04-20 19:30:00
3 | 1 | 2020-04-20 14:30:00 | 2020-04-20 15:40:00
4 | 1 | 2020-04-20 13:00:00 | 2020-04-20 15:00:00
5 | 1 | 2020-04-21 13:00:00 | 2020-04-21 14:00:00
6 | 1 | 2020-04-21 14:00:00 | 2020-04-21 15:00:00

You can use exists:
select t.*
from t
where exists (select 1
              from t t2
              where t2.timedate0 < t.timedate2 and
                    t2.timedate2 > t.timedate0 and
                    t2.id_user = t.id_user and t2.id <> t.id
             );
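Since you are on PostgreSQL 9.2, the same test can also be written with range types; this is a sketch under the same assumptions (half-open periods, same table t), not a requirement:

-- Sketch: tsrange() builds a [start, end) range; && tests for overlap.
select t.*
from t
where exists (select 1
              from t t2
              where tsrange(t2.timedate0, t2.timedate2) && tsrange(t.timedate0, t.timedate2)
                and t2.id_user = t.id_user
                and t2.id <> t.id
             );

A GiST expression index on tsrange(timedate0, timedate2) can then speed this up on large tables.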

Related

PostgreSQL count max number of concurrent user sessions per hour

Situation
We have a PostgreSQL 9.1 database containing user sessions with login date/time and logout date/time per row. Table looks like this:
user_id | login_ts | logout_ts
------------+--------------+--------------------------------
USER1 | 2021-02-03 09:23:00 | 2021-02-03 11:44:00
USER2 | 2021-02-03 10:49:00 | 2021-02-03 13:30:00
USER3 | 2021-02-03 13:32:00 | 2021-02-03 15:31:00
USER4 | 2021-02-04 13:50:00 | 2021-02-04 14:53:00
USER5 | 2021-02-04 14:44:00 | 2021-02-04 15:21:00
USER6 | 2021-02-04 14:52:00 | 2021-02-04 17:59:00
Goal
We would like to get the max number of concurrent users for each of the 24 hours of each day in the time range. Like this:
date | hour | sessions
-----------+-------+-----------
2021-02-03 | 01:00 | 0
2021-02-03 | 02:00 | 0
2021-02-03 | 03:00 | 0
2021-02-03 | 04:00 | 0
2021-02-03 | 05:00 | 0
2021-02-03 | 06:00 | 0
2021-02-03 | 07:00 | 0
2021-02-03 | 08:00 | 0
2021-02-03 | 09:00 | 1
2021-02-03 | 10:00 | 2
2021-02-03 | 11:00 | 2
2021-02-03 | 12:00 | 1
2021-02-03 | 13:00 | 1
2021-02-03 | 14:00 | 1
2021-02-03 | 15:00 | 0
2021-02-03 | 16:00 | 0
2021-02-03 | 17:00 | 0
2021-02-03 | 18:00 | 0
2021-02-03 | 19:00 | 0
2021-02-03 | 20:00 | 0
2021-02-03 | 21:00 | 0
2021-02-03 | 22:00 | 0
2021-02-03 | 23:00 | 0
2021-02-03 | 24:00 | 0
2021-02-04 | 01:00 | 0
2021-02-04 | 02:00 | 0
2021-02-04 | 03:00 | 0
2021-02-04 | 04:00 | 0
2021-02-04 | 05:00 | 0
2021-02-04 | 06:00 | 0
2021-02-04 | 07:00 | 0
2021-02-04 | 08:00 | 0
2021-02-04 | 09:00 | 0
2021-02-04 | 10:00 | 0
2021-02-04 | 11:00 | 0
2021-02-04 | 12:00 | 0
2021-02-04 | 13:00 | 1
2021-02-04 | 14:00 | 3
2021-02-04 | 15:00 | 1
2021-02-04 | 16:00 | 1
2021-02-04 | 17:00 | 1
2021-02-04 | 18:00 | 0
2021-02-04 | 19:00 | 0
2021-02-04 | 20:00 | 0
2021-02-04 | 21:00 | 0
2021-02-04 | 22:00 | 0
2021-02-04 | 23:00 | 0
2021-02-04 | 24:00 | 0
Considerations
"Concurrent" means at the same point in time. Thus user2 and user3 do not overlap for
13:00, but user4 and user6 do overlap for 14:00 even though they only overlap for 1 minute.
User sessions can span multiple hours and would thus count for each hour they are part of.
Each user can only be online once at one point in time.
If there are no users for a particular hour, this should return 0.
Similar questions
A similar question was answered here: Count max. number of concurrent user sessions per day by Erwin Brandstetter. However, that is per day rather than per hour, and I am apparently too much of a noob at PostgreSQL to translate it into an hourly version, so I'm hoping someone can help.
I would decompose this into two problems:
Find the number of overlaps and when they begin and end.
Find the hours.
Note two things:
I am assuming that the logout time '2021-02-04 17:59:00' on the last record is a typo.
The following goes by the beginning of the hour and puts the date/hour in a single column.
First, calculate the overlaps. For this, unpivot the logins and logouts. Assign a counter of +1 for logins and -1 for logouts and take a cumulative sum. This looks like:
with overlap as (
      select v.ts, sum(v.inc) as inc,
             sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
             lead(v.ts) over (order by v.ts) as next_ts
      from sessions s cross join lateral
           (values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
      group by v.ts
     )
select *
from overlap
order by ts;
For the next step, use generate_series() to generate timestamps one hour apart. Look for the maximum value during that period using left join and group by:
with overlap as (
      select v.ts, sum(v.inc) as inc,
             sum(sum(v.inc)) over (order by v.ts) as num_overlaps,
             lead(v.ts) over (order by v.ts) as next_ts
      from sessions s cross join lateral
           (values (login_ts, 1), (logout_ts, -1)) v(ts, inc)
      group by v.ts
     )
select gs.hh, coalesce(max(o.num_overlaps), 0) as num_overlaps
from generate_series('2021-02-03'::date, '2021-02-05'::date, interval '1 hour') gs(hh) left join
     overlap o
     on o.ts < gs.hh + interval '1 hour' and
        o.next_ts > gs.hh
group by gs.hh
order by gs.hh;
Here is a db<>fiddle using your data, fixed with a reasonable logout time for the last record.
For any time period you can calculate the number of concurrent sessions using the OVERLAPS operator in SQL:
CREATE TEMP TABLE sessions (
    user_id text not null,
    login_ts timestamp,
    logout_ts timestamp );

INSERT INTO sessions SELECT 'webuser', d,
       d + ((1 + random()*300)::text || ' seconds')::interval
FROM generate_series(
       '2021-02-28 07:42'::timestamp,
       '2021-03-01 07:42'::timestamp,
       '5 seconds'::interval) AS d;

SELECT s1.user_id, s1.login_ts, s1.logout_ts,
       (select count(*) FROM sessions s2
        WHERE (s2.login_ts, s2.logout_ts) OVERLAPS (s1.login_ts, s1.logout_ts))
       AS parallel_sessions
FROM sessions s1 LIMIT 10;
user_id | login_ts | logout_ts | parallel_sessions
---------+---------------------+----------------------------+------------------
webuser | 2021-02-28 07:42:00 | 2021-02-28 07:42:25.528594 | 6
webuser | 2021-02-28 07:42:05 | 2021-02-28 07:45:50.513769 | 47
webuser | 2021-02-28 07:42:10 | 2021-02-28 07:44:18.810066 | 28
webuser | 2021-02-28 07:42:15 | 2021-02-28 07:45:17.3888 | 40
webuser | 2021-02-28 07:42:20 | 2021-02-28 07:43:14.325476 | 15
webuser | 2021-02-28 07:42:25 | 2021-02-28 07:43:44.174841 | 21
webuser | 2021-02-28 07:42:30 | 2021-02-28 07:43:32.679052 | 18
webuser | 2021-02-28 07:42:35 | 2021-02-28 07:45:12.554117 | 38
webuser | 2021-02-28 07:42:40 | 2021-02-28 07:46:37.94311 | 55
webuser | 2021-02-28 07:42:45 | 2021-02-28 07:43:08.398444 | 13
(10 rows)
This works well on small data sets, but for better performance use PostgreSQL range types as below. This works on PostgreSQL 9.2 and later.
ALTER TABLE sessions ADD timerange tsrange;
UPDATE sessions SET timerange = tsrange(login_ts, logout_ts);
CREATE INDEX ON sessions USING gist (timerange);

CREATE TEMP TABLE level1 AS
SELECT s1.user_id, s1.login_ts, s1.logout_ts,
       (select count(*) FROM sessions s2
        WHERE s2.timerange && s1.timerange) AS parallel_sessions
FROM sessions s1;

SELECT date_trunc('hour', login_ts) AS hour, count(*),
       max(parallel_sessions)
FROM level1
GROUP BY hour;
hour | count | max
---------------------+-------+-----
2021-02-28 14:00:00 | 720 | 98
2021-03-01 03:00:00 | 720 | 99
2021-03-01 06:00:00 | 720 | 94
2021-02-28 09:00:00 | 720 | 96
2021-02-28 10:00:00 | 720 | 97
2021-02-28 18:00:00 | 720 | 94
2021-02-28 11:00:00 | 720 | 97
2021-03-01 00:00:00 | 720 | 97
2021-02-28 19:00:00 | 720 | 99
2021-02-28 16:00:00 | 720 | 94
2021-02-28 17:00:00 | 720 | 95
2021-03-01 02:00:00 | 720 | 99
2021-02-28 08:00:00 | 720 | 96
2021-02-28 23:00:00 | 720 | 94
2021-03-01 07:00:00 | 505 | 92
2021-03-01 04:00:00 | 720 | 95
2021-02-28 21:00:00 | 720 | 97
2021-03-01 01:00:00 | 720 | 93
2021-02-28 22:00:00 | 720 | 96
2021-03-01 05:00:00 | 720 | 93
2021-02-28 20:00:00 | 720 | 95
2021-02-28 13:00:00 | 720 | 95
2021-02-28 12:00:00 | 720 | 97
2021-02-28 15:00:00 | 720 | 98
2021-02-28 07:00:00 | 216 | 93
(25 rows)
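Note that the GROUP BY output above comes back in arbitrary order; for a chronological listing, add an ORDER BY to the same query:

SELECT date_trunc('hour', login_ts) AS hour, count(*),
       max(parallel_sessions)
FROM level1
GROUP BY hour
ORDER BY hour;  -- deterministic, chronological output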

Oracle query - select date and time - Overlapping

An Oracle query for dates and times with overlapping ranges: overlapping periods for the same ID should be merged and the total hours recomputed.
ID startdate enddate hours
a124 10/10/2019 07:30:00 10/10/2019 11:30:00 4
a124 10/10/2019 07:00:00 10/10/2019 15:10:00 8.17
bc24 10/10/2019 07:30:00 10/10/2019 11:30:00 4
bc24 10/10/2019 10:30:00 10/10/2019 15:30:00 5
er67 10/10/2019 09:30:00 10/10/2019 11:30:00 2
er67 10/10/2019 15:30:00 10/10/2019 16:30:00 1
Desired Output :
ID startdate enddate hours
a124 10/10/2019 07:00:00 10/10/2019 15:10:00 8.17
bc24 10/10/2019 07:30:00 10/10/2019 15:30:00 8
er67 10/10/2019 09:30:00 10/10/2019 11:30:00 2
er67 10/10/2019 15:30:00 10/10/2019 16:30:00 1
I would approach this using lag(), a cumulative sum(), and aggregation. Here is a step-by-step explanation.
First, you can use lag() to recover the previous start and end date for the same id:
select
    t.*,
    lag(startdate) over(partition by id order by startdate) lagstartdate,
    lag(enddate) over(partition by id order by startdate) lagenddate
from mytable t
ID | STARTDATE | ENDDATE | HOURS | LAGSTARTDATE | LAGENDDATE
:--- | :------------------ | :------------------ | ----: | :------------------ | :------------------
a124 | 2019-10-10 07:00:00 | 2019-10-10 15:10:00 | 8.17 | null | null
a124 | 2019-10-10 07:30:00 | 2019-10-10 11:30:00 | 4 | 2019-10-10 07:00:00 | 2019-10-10 15:10:00
bc24 | 2019-10-10 07:30:00 | 2019-10-10 11:30:00 | 4 | null | null
bc24 | 2019-10-10 10:30:00 | 2019-10-10 15:30:00 | 5 | 2019-10-10 07:30:00 | 2019-10-10 11:30:00
er67 | 2019-10-10 09:30:00 | 2019-10-10 11:30:00 | 2 | null | null
er67 | 2019-10-10 15:30:00 | 2019-10-10 16:30:00 | 1 | 2019-10-10 09:30:00 | 2019-10-10 11:30:00
Then, you can set up the cumulative sum to split records having the same id into groups (that will later on be aggregated). When the dates do not overlap, a new group starts:
select
    t.*,
    sum(
        case when startdate <= lagenddate or enddate <= lagstartdate
            then 0
            else 1
        end
    ) over(partition by id order by startdate) grp
from (
    select
        t.*,
        lag(startdate) over(partition by id order by startdate) lagstartdate,
        lag(enddate) over(partition by id order by startdate) lagenddate
    from mytable t
) t
ID | STARTDATE | ENDDATE | HOURS | LAGSTARTDATE | LAGENDDATE | GRP
:--- | :------------------ | :------------------ | ----: | :------------------ | :------------------ | --:
a124 | 2019-10-10 07:00:00 | 2019-10-10 15:10:00 | 8.17 | null | null | 1
a124 | 2019-10-10 07:30:00 | 2019-10-10 11:30:00 | 4 | 2019-10-10 07:00:00 | 2019-10-10 15:10:00 | 1
bc24 | 2019-10-10 07:30:00 | 2019-10-10 11:30:00 | 4 | null | null | 1
bc24 | 2019-10-10 10:30:00 | 2019-10-10 15:30:00 | 5 | 2019-10-10 07:30:00 | 2019-10-10 11:30:00 | 1
er67 | 2019-10-10 09:30:00 | 2019-10-10 11:30:00 | 2 | null | null | 1
er67 | 2019-10-10 15:30:00 | 2019-10-10 16:30:00 | 1 | 2019-10-10 09:30:00 | 2019-10-10 11:30:00 | 2
Finally, you can group the records by id and grp: min() and max() give you the date range, then you can compute the date difference.
Final query:
select
    id,
    min(startdate) startdate,
    max(enddate) enddate,
    round((max(enddate) - min(startdate)) * 24, 2) hours
from (
    select
        t.*,
        sum(
            case when startdate <= lagenddate or enddate <= lagstartdate
                then 0
                else 1
            end
        ) over(partition by id order by startdate) grp
    from (
        select
            t.*,
            lag(startdate) over(partition by id order by startdate) lagstartdate,
            lag(enddate) over(partition by id order by startdate) lagenddate
        from mytable t
    ) t
) t
group by id, grp
order by id, grp
ID | STARTDATE | ENDDATE | HOURS
:--- | :------------------ | :------------------ | ----:
a124 | 2019-10-10 07:00:00 | 2019-10-10 15:10:00 | 8.17
bc24 | 2019-10-10 07:30:00 | 2019-10-10 15:30:00 | 8
er67 | 2019-10-10 09:30:00 | 2019-10-10 11:30:00 | 2
er67 | 2019-10-10 15:30:00 | 2019-10-10 16:30:00 | 1
Demo on DB Fiddle
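To experiment outside the fiddle, a minimal setup matching the sample data might look like this (the table name mytable is taken from the queries above; the DD/MM date format is an assumption, though 10/10 reads the same either way):

-- Hypothetical setup reproducing the question's sample rows.
CREATE TABLE mytable (id VARCHAR2(10), startdate DATE, enddate DATE, hours NUMBER);

INSERT INTO mytable VALUES ('a124', TO_DATE('10/10/2019 07:30:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('10/10/2019 11:30:00', 'DD/MM/YYYY HH24:MI:SS'), 4);
INSERT INTO mytable VALUES ('a124', TO_DATE('10/10/2019 07:00:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('10/10/2019 15:10:00', 'DD/MM/YYYY HH24:MI:SS'), 8.17);
INSERT INTO mytable VALUES ('bc24', TO_DATE('10/10/2019 07:30:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('10/10/2019 11:30:00', 'DD/MM/YYYY HH24:MI:SS'), 4);
INSERT INTO mytable VALUES ('bc24', TO_DATE('10/10/2019 10:30:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('10/10/2019 15:30:00', 'DD/MM/YYYY HH24:MI:SS'), 5);
INSERT INTO mytable VALUES ('er67', TO_DATE('10/10/2019 09:30:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('10/10/2019 11:30:00', 'DD/MM/YYYY HH24:MI:SS'), 2);
INSERT INTO mytable VALUES ('er67', TO_DATE('10/10/2019 15:30:00', 'DD/MM/YYYY HH24:MI:SS'), TO_DATE('10/10/2019 16:30:00', 'DD/MM/YYYY HH24:MI:SS'), 1);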
You can use analytic functions (LAG and SUM) as follows to make groups of dates without gaps:
SELECT
    ID,
    MIN(STARTDATE),
    MAX(ENDDATE),
    ROUND(SUM(CASE
        WHEN PREV_ENDDATE BETWEEN STARTDATE AND ENDDATE THEN ENDDATE - PREV_ENDDATE
        ELSE ENDDATE - STARTDATE
    END) * 24, 2) AS HOURS
FROM
    (
        SELECT
            T.*,
            SUM(CASE
                WHEN T.PREV_ENDDATE < T.STARTDATE THEN 1
            END) OVER(
                PARTITION BY ID
                ORDER BY
                    STARTDATE, ENDDATE
            ) SM
        FROM
            (
                SELECT
                    ID,
                    STARTDATE,
                    ENDDATE,
                    LAG(ENDDATE) OVER(
                        PARTITION BY ID
                        ORDER BY
                            STARTDATE, ENDDATE
                    ) AS PREV_ENDDATE
                FROM
                    T TOUT
                WHERE
                    NOT EXISTS (
                        SELECT
                            1
                        FROM
                            T TIN
                        WHERE
                            TIN.ID = TOUT.ID
                            AND TOUT.STARTDATE BETWEEN TIN.STARTDATE AND TIN.ENDDATE
                            AND TOUT.ENDDATE BETWEEN TIN.STARTDATE AND TIN.ENDDATE
                            AND TOUT.ROWID <> TIN.ROWID
                    )
            ) T
    )
GROUP BY
    ID,
    SM;
ID MIN(START MAX(ENDDA HOURS
---- --------- --------- ----------
a124 10-OCT-19 10-OCT-19 8.17
bc24 10-OCT-19 10-OCT-19 8
er67 10-OCT-19 10-OCT-19 1
er67 10-OCT-19 10-OCT-19 2
Cheers!!

How to calculate a future datetime after a few working hours

I am trying to calculate the date and time that falls 2 or more working hours after a given moment, even if I start calculating on a weekend or after working hours. It should work like this:
working hours are from 8am to 4pm
if I start calculating at 3pm on Friday and add 2 working hours, the result should be 9am on Monday
IF (@data_przyj > @WorkStart AND DATEADD(MINUTE, @ileNaZapytanie, @data_przyj) < @WorkFinish)
BEGIN
    WHILE (DATEPART(dw, @CurrentDate) != 1 AND DATEPART(dw, @CurrentDate) != 7)
    BEGIN
        SET @CurrentDate = DATEADD(day, 1, @CurrentDate)
        SET @czyBylPrzeskok = 1
    END
    IF (@czyBylPrzeskok = 1)
    BEGIN
        SET @LastDay = @CurrentDate
        SET @LastDay = DATEADD(MINUTE, DATEDIFF(MINUTE, DATEADD(dd, 0, DATEDIFF(MINUTE, 0, @data_przyj)), @WorkStart), @LastDay)
        SET @LastDay = DATEADD(HOUR, DATEDIFF(MINUTE, DATEADD(dd, 0, DATEDIFF(HOUR, 0, @data_przyj)), @WorkStart), @LastDay)
    END
    ELSE
    BEGIN
        SET @LastDay = DATEADD(MINUTE, @ileNaZapytanie, @data_przyj)
    END
    SET @IsCalculated = 1
END
ELSE IF (@data_przyj > @WorkStart AND DATEADD(MINUTE, @ileNaZapytanie, @data_przyj) > @WorkFinish)
BEGIN
    SET @LastDay = DATEADD(DD, 3, GETDATE());
    SET @IsCalculated = 1
END
ELSE IF (@data_przyj < @WorkStart)
BEGIN
    SET @LastDay = GETDATE();
    SET @IsCalculated = 1
END
END
EDIT:
For example, with working hours 8:00-16:00: for the date '2019-09-06 15:00', after adding 2 working hours the result should be '2019-09-09 09:00'; for the date '2019-09-06 13:00' it should be '2019-09-06 15:00', etc.
The following solution uses a calendar table with working hours, then uses a rolling sum to accumulate each day's business hours and find the day on which you need to end.
Using a calendar table will give you the flexibility of having different business time periods and very easily adding or removing holidays (see the example at the end of this answer).
Setup (calendar table):
IF OBJECT_ID('tempdb..#WorkingCalendar') IS NOT NULL
    DROP TABLE #WorkingCalendar

CREATE TABLE #WorkingCalendar (
    Date DATE PRIMARY KEY,
    IsWorkingDay BIT,
    WorkingStartTime DATETIME,
    WorkingEndTime DATETIME)

SET DATEFIRST 1 -- 1: Monday, 7: Sunday

DECLARE @StartDate DATE = '2019-01-01'
DECLARE @EndDate DATE = '2030-01-01'

;WITH RecursiveDates AS
(
    SELECT
        GeneratedDate = @StartDate

    UNION ALL

    SELECT
        GeneratedDate = DATEADD(DAY, 1, R.GeneratedDate)
    FROM
        RecursiveDates AS R
    WHERE
        R.GeneratedDate < @EndDate
)
INSERT INTO #WorkingCalendar (
    Date,
    IsWorkingDay,
    WorkingStartTime,
    WorkingEndTime)
SELECT
    Date = R.GeneratedDate,
    IsWorkingDay = CASE
        WHEN DATEPART(WEEKDAY, R.GeneratedDate) BETWEEN 1 AND 5 THEN 1 -- From Monday to Friday
        ELSE 0 END,
    WorkingStartTime = CASE
        WHEN DATEPART(WEEKDAY, R.GeneratedDate) BETWEEN 1 AND 5
        THEN CONVERT(DATETIME, R.GeneratedDate) + CONVERT(DATETIME, '08:00:00') END,
    WorkingEndTime = CASE
        WHEN DATEPART(WEEKDAY, R.GeneratedDate) BETWEEN 1 AND 5
        THEN CONVERT(DATETIME, R.GeneratedDate) + CONVERT(DATETIME, '16:00:00') END
FROM
    RecursiveDates AS R
OPTION
    (MAXRECURSION 0)
Generates a table like the following:
+------------+--------------+-------------------------+-------------------------+
| Date | IsWorkingDay | WorkingStartTime | WorkingEndTime |
+------------+--------------+-------------------------+-------------------------+
| 2019-01-01 | 1 | 2019-01-01 08:00:00.000 | 2019-01-01 16:00:00.000 |
| 2019-01-02 | 1 | 2019-01-02 08:00:00.000 | 2019-01-02 16:00:00.000 |
| 2019-01-03 | 1 | 2019-01-03 08:00:00.000 | 2019-01-03 16:00:00.000 |
| 2019-01-04 | 1 | 2019-01-04 08:00:00.000 | 2019-01-04 16:00:00.000 |
| 2019-01-05 | 0 | NULL | NULL |
| 2019-01-06 | 0 | NULL | NULL |
| 2019-01-07 | 1 | 2019-01-07 08:00:00.000 | 2019-01-07 16:00:00.000 |
| 2019-01-08 | 1 | 2019-01-08 08:00:00.000 | 2019-01-08 16:00:00.000 |
| 2019-01-09 | 1 | 2019-01-09 08:00:00.000 | 2019-01-09 16:00:00.000 |
| 2019-01-10 | 1 | 2019-01-10 08:00:00.000 | 2019-01-10 16:00:00.000 |
| 2019-01-11 | 1 | 2019-01-11 08:00:00.000 | 2019-01-11 16:00:00.000 |
| 2019-01-12 | 0 | NULL | NULL |
| 2019-01-13 | 0 | NULL | NULL |
| 2019-01-14 | 1 | 2019-01-14 08:00:00.000 | 2019-01-14 16:00:00.000 |
| 2019-01-15 | 1 | 2019-01-15 08:00:00.000 | 2019-01-15 16:00:00.000 |
| 2019-01-16 | 1 | 2019-01-16 08:00:00.000 | 2019-01-16 16:00:00.000 |
| 2019-01-17 | 1 | 2019-01-17 08:00:00.000 | 2019-01-17 16:00:00.000 |
+------------+--------------+-------------------------+-------------------------+
Proposed Solution:
DECLARE @v_BusinessHoursToAdd INT = 2
DECLARE @v_CurrentDateTimeHour DATETIME = '2019-09-06 15:00'

;WITH CalendarFromNow AS
(
    SELECT
        T.Date,
        WorkingStartTime = CASE
            WHEN @v_CurrentDateTimeHour BETWEEN T.WorkingStartTime AND T.WorkingEndTime THEN @v_CurrentDateTimeHour
            ELSE T.WorkingStartTime END,
        WorkingEndTime = T.WorkingEndTime
    FROM
        #WorkingCalendar AS T
    WHERE
        T.Date >= CONVERT(DATE, @v_CurrentDateTimeHour) AND
        T.IsWorkingDay = 1
),
RollingBusinessSum AS
(
    SELECT
        C.Date,
        C.WorkingStartTime,
        C.WorkingEndTime,
        AmountBusinessHours = DATEDIFF(HOUR, C.WorkingStartTime, C.WorkingEndTime),
        RollingBusinessHoursSum = SUM(DATEDIFF(HOUR, C.WorkingStartTime, C.WorkingEndTime)) OVER (ORDER BY C.Date),
        PendingHours = @v_BusinessHoursToAdd - SUM(DATEDIFF(HOUR, C.WorkingStartTime, C.WorkingEndTime)) OVER (ORDER BY C.Date)
    FROM
        CalendarFromNow AS C
)
SELECT TOP 1
    EndingHour = DATEADD(
        HOUR,
        R.PendingHours,
        R.WorkingEndTime)
FROM
    RollingBusinessSum AS R
WHERE
    R.PendingHours < 0
ORDER BY
    R.Date
Explanation:
The first CTE, CalendarFromNow, simply filters the calendar to dates from the current hour's date onward and moves the first day's working start time up to the current hour, since this is going to be the starting point to count hours from.
+------------+-------------------------+-------------------------+
| Date | WorkingStartTime | WorkingEndTime |
+------------+-------------------------+-------------------------+
| 2019-09-06 | 2019-09-06 15:00:00.000 | 2019-09-06 16:00:00.000 |
| 2019-09-09 | 2019-09-09 08:00:00.000 | 2019-09-09 16:00:00.000 |
| 2019-09-10 | 2019-09-10 08:00:00.000 | 2019-09-10 16:00:00.000 |
| 2019-09-11 | 2019-09-11 08:00:00.000 | 2019-09-11 16:00:00.000 |
| 2019-09-12 | 2019-09-12 08:00:00.000 | 2019-09-12 16:00:00.000 |
| 2019-09-13 | 2019-09-13 08:00:00.000 | 2019-09-13 16:00:00.000 |
| 2019-09-16 | 2019-09-16 08:00:00.000 | 2019-09-16 16:00:00.000 |
+------------+-------------------------+-------------------------+
The second CTE, RollingBusinessSum, calculates the number of business hours on each day and accumulates them over the days. The last column, PendingHours, is the number of hours we need to add from now, minus the running sum of business hours over the days.
+------------+-------------------------+-------------------------+---------------------+-------------------------+--------------+
| Date | WorkingStartTime | WorkingEndTime | AmountBusinessHours | RollingBusinessHoursSum | PendingHours |
+------------+-------------------------+-------------------------+---------------------+-------------------------+--------------+
| 2019-09-06 | 2019-09-06 15:00:00.000 | 2019-09-06 16:00:00.000 | 1 | 1 | 1 |
| 2019-09-09 | 2019-09-09 08:00:00.000 | 2019-09-09 16:00:00.000 | 8 | 9 | -7 |
| 2019-09-10 | 2019-09-10 08:00:00.000 | 2019-09-10 16:00:00.000 | 8 | 17 | -15 |
| 2019-09-11 | 2019-09-11 08:00:00.000 | 2019-09-11 16:00:00.000 | 8 | 25 | -23 |
| 2019-09-12 | 2019-09-12 08:00:00.000 | 2019-09-12 16:00:00.000 | 8 | 33 | -31 |
| 2019-09-13 | 2019-09-13 08:00:00.000 | 2019-09-13 16:00:00.000 | 8 | 41 | -39 |
+------------+-------------------------+-------------------------+---------------------+-------------------------+--------------+
Finally, the first day on which the PendingHours column is negative is the day on which we reach the number of hours we wanted to add; this is the TOP 1 with ORDER BY. To get the final datetime, we add the (negative) pending hours to the working end time of that particular day.
+------------+-------------------------+-------------------------+---------------------+-------------------------+--------------+-------------------------+
| Date | WorkingStartTime | WorkingEndTime | AmountBusinessHours | RollingBusinessHoursSum | PendingHours | EndingHour |
+------------+-------------------------+-------------------------+---------------------+-------------------------+--------------+-------------------------+
| 2019-09-09 | 2019-09-09 08:00:00.000 | 2019-09-09 16:00:00.000 | 8 | 9 | -7 | 2019-09-09 09:00:00.000 |
+------------+-------------------------+-------------------------+---------------------+-------------------------+--------------+-------------------------+
You might have to tweak performance and do boundary tests but this might give you a flexible idea of how to work with working hours at holidays or different time periods.
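As an example of the flexibility mentioned earlier, a holiday can be taken out of the calendar with a single update (the date below is hypothetical; use whichever dates apply to you):

-- Mark a hypothetical holiday as non-working; one such statement per holiday.
UPDATE #WorkingCalendar
SET IsWorkingDay = 0,
    WorkingStartTime = NULL,
    WorkingEndTime = NULL
WHERE Date = '2019-12-25'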

SQL query for setting column based on last seven entries

Problem
I am having trouble figuring out how to create a query that can tell if any user entry is preceded by 7 days without any activity (secondsPlayed == 0) and, if so, indicate it with the value 1, otherwise 0.
This also means that if the user has fewer than 7 entries, the value will be 0 across all entries.
Input table:
+------------------------------+-------------------------+---------------+
| userid | estimationDate | secondsPlayed |
+------------------------------+-------------------------+---------------+
| a | 2016-07-14 00:00:00 UTC | 192.5 |
| a | 2016-07-15 00:00:00 UTC | 357.3 |
| a | 2016-07-16 00:00:00 UTC | 0 |
| a | 2016-07-17 00:00:00 UTC | 0 |
| a | 2016-07-18 00:00:00 UTC | 0 |
| a | 2016-07-19 00:00:00 UTC | 0 |
| a | 2016-07-20 00:00:00 UTC | 0 |
| a | 2016-07-21 00:00:00 UTC | 0 |
| a | 2016-07-22 00:00:00 UTC | 0 |
| a | 2016-07-23 00:00:00 UTC | 0 |
| a | 2016-07-24 00:00:00 UTC | 0 |
| ---------------------------- | ---------------------- | ---- |
| b | 2016-07-02 00:00:00 UTC | 31.2 |
| b | 2016-07-03 00:00:00 UTC | 42.1 |
| b | 2016-07-04 00:00:00 UTC | 41.9 |
| b | 2016-07-05 00:00:00 UTC | 43.2 |
| b | 2016-07-06 00:00:00 UTC | 91.5 |
| b | 2016-07-07 00:00:00 UTC | 0 |
| b | 2016-07-08 00:00:00 UTC | 0 |
| b | 2016-07-09 00:00:00 UTC | 239.1 |
| b | 2016-07-10 00:00:00 UTC | 0 |
+------------------------------+-------------------------+---------------+
The intended output table should look like this:
Output table:
+------------------------------+-------------------------+---------------+----------+
| userid | estimationDate | secondsPlayed | inactive |
+------------------------------+-------------------------+---------------+----------+
| a | 2016-07-14 00:00:00 UTC | 192.5 | 0 |
| a | 2016-07-15 00:00:00 UTC | 357.3 | 0 |
| a | 2016-07-16 00:00:00 UTC | 0 | 0 |
| a | 2016-07-17 00:00:00 UTC | 0 | 0 |
| a | 2016-07-18 00:00:00 UTC | 0 | 0 |
| a | 2016-07-19 00:00:00 UTC | 0 | 0 |
| a | 2016-07-20 00:00:00 UTC | 0 | 0 |
| a | 2016-07-21 00:00:00 UTC | 0 | 0 |
| a | 2016-07-22 00:00:00 UTC | 0 | 1 |
| a | 2016-07-23 00:00:00 UTC | 0 | 1 |
| a | 2016-07-24 00:00:00 UTC | 0 | 1 |
| ---------------------------- | ----------------------- | ----- | ----- |
| b | 2016-07-02 00:00:00 UTC | 31.2 | 0 |
| b | 2016-07-03 00:00:00 UTC | 42.1 | 0 |
| b | 2016-07-04 00:00:00 UTC | 41.9 | 0 |
| b | 2016-07-05 00:00:00 UTC | 43.2 | 0 |
| b | 2016-07-06 00:00:00 UTC | 91.5 | 0 |
| b | 2016-07-07 00:00:00 UTC | 0 | 0 |
| b | 2016-07-08 00:00:00 UTC | 0 | 0 |
| b | 2016-07-09 00:00:00 UTC | 239.1 | 0 |
| b | 2016-07-10 00:00:00 UTC | 0 | 0 |
+------------------------------+-------------------------+---------------+----------+
Thoughts
At first I was thinking about using the lag function with an offset of 7, but this would obviously not take into account any of the entries in between.
I was also thinking about creating a rolling window/average over a period of 7 days and evaluating whether it is above 0. However, this might be a bit above my skill level.
Any chance anyone has a good solution to this problem?
Assuming that you have data every day (which seems like a reasonable assumption), you can sum a window function:
select t.*,
       (case when sum(secondsplayed) over (partition by userid
                                           order by estimationdate
                                           rows between 6 preceding and current row
                                          ) = 0 and
                  row_number() over (partition by userid order by estimationdate) >= 7
             then 1
             else 0
        end) as inactive
from t;
In addition to no holes in the dates, this also assumes that secondsplayed is never negative. (Negative values can easily be incorporated into the logic, but that seems unnecessary.)
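Should negative values ever occur after all, one sketch (not from the original answer) is to clamp each value at zero with greatest() before summing, so a negative row cannot cancel out real activity in the window:

select t.*,
       (case when sum(greatest(secondsplayed, 0)) over (partition by userid
                                                        order by estimationdate
                                                        rows between 6 preceding and current row
                                                       ) = 0 and
                  row_number() over (partition by userid order by estimationdate) >= 7
             then 1
             else 0
        end) as inactive
from t;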
In my experience, input tables of this type do not contain explicit inactivity entries and usually look like this (only activity entries are present here):
Input table:
+------------------------------+-------------------------+---------------+
| userid | estimationDate | secondsPlayed |
+------------------------------+-------------------------+---------------+
| a | 2016-07-14 00:00:00 UTC | 192.5 |
| a | 2016-07-15 00:00:00 UTC | 357.3 |
| ---------------------------- | ---------------------- | ---- |
| b | 2016-07-02 00:00:00 UTC | 31.2 |
| b | 2016-07-03 00:00:00 UTC | 42.1 |
| b | 2016-07-04 00:00:00 UTC | 41.9 |
| b | 2016-07-05 00:00:00 UTC | 43.2 |
| b | 2016-07-06 00:00:00 UTC | 91.5 |
| b | 2016-07-09 00:00:00 UTC | 239.1 |
+------------------------------+-------------------------+---------------+
So, below is for BigQuery Standard SQL and input as above
#standardSQL
WITH `project.dataset.table` AS (
  SELECT 'a' userid, TIMESTAMP '2016-07-14 00:00:00 UTC' estimationDate, 192.5 secondsPlayed UNION ALL
  SELECT 'a', '2016-07-15 00:00:00 UTC', 357.3 UNION ALL
  SELECT 'b', '2016-07-02 00:00:00 UTC', 31.2 UNION ALL
  SELECT 'b', '2016-07-03 00:00:00 UTC', 42.1 UNION ALL
  SELECT 'b', '2016-07-04 00:00:00 UTC', 41.9 UNION ALL
  SELECT 'b', '2016-07-05 00:00:00 UTC', 43.2 UNION ALL
  SELECT 'b', '2016-07-06 00:00:00 UTC', 91.5 UNION ALL
  SELECT 'b', '2016-07-09 00:00:00 UTC', 239.1
), time_frame AS (
  SELECT day
  FROM UNNEST(GENERATE_DATE_ARRAY('2016-07-02', '2016-07-24')) day
)
SELECT
  users.userid,
  day,
  IFNULL(secondsPlayed, 0) secondsPlayed,
  CAST(1 - SIGN(SUM(IFNULL(secondsPlayed, 0))
    OVER(
      PARTITION BY users.userid
      ORDER BY UNIX_DATE(day)
      RANGE BETWEEN 6 PRECEDING AND CURRENT ROW
    )) AS INT64) AS inactive
FROM time_frame tf
CROSS JOIN (SELECT DISTINCT userid FROM `project.dataset.table`) users
LEFT JOIN `project.dataset.table` t
ON day = DATE(estimationDate) AND users.userid = t.userid
ORDER BY userid, day
with result
Row userid day secondsPlayed inactive
...
13 a 2016-07-14 192.5 0
14 a 2016-07-15 357.3 0
15 a 2016-07-16 0.0 0
16 a 2016-07-17 0.0 0
17 a 2016-07-18 0.0 0
18 a 2016-07-19 0.0 0
19 a 2016-07-20 0.0 0
20 a 2016-07-21 0.0 0
21 a 2016-07-22 0.0 1
22 a 2016-07-23 0.0 1
23 a 2016-07-24 0.0 1
24 b 2016-07-02 31.2 0
25 b 2016-07-03 42.1 0
26 b 2016-07-04 41.9 0
27 b 2016-07-05 43.2 0
28 b 2016-07-06 91.5 0
29 b 2016-07-07 0.0 0
30 b 2016-07-08 0.0 0
31 b 2016-07-09 239.1 0
32 b 2016-07-10 0.0 0
...

Oracle: Select parallel entries

I am searching for the most efficient way to run a relatively complicated query on a relatively large table.
The concept is that:
I have a table that holds records of phases that can run parallel to each other
The number of records exceeds 5 million (and keeps growing)
The time period starts about 5 years ago
For performance reasons, this select could be applied to just the last 3 months (about 300,000 records), but only if it is not physically possible to do it for the whole table
Oracle version: 11g
The data sample looks as follows:
Table Phases (ID, START_TS, END_TS, PRIO)
1 10:00:00 10:20:10 10
2 10:05:00 10:10:00 11
3 10:05:20 10:15:00 9
4 10:16:00 10:25:00 8
5 10:24:00 10:45:15 1
6 10:26:00 10:30:00 10
7 10:27:00 10:35:00 15
8 10:34:00 10:50:00 5
9 10:50:00 10:55:00 20
10 10:55:00 11:00:00 15
Above you can see how the information is currently stored (of course there are several other columns with irrelevant information).
There are two requirements (or problems to be solved)
If we sum the duration of all the phases, the result is MUCH more than the one hour that the above data actually spans. (There could be gaps between the phases, so taking the first start_ts and the last end_ts would not be sufficient.)
The data should be displayed in a form that makes it visible which phases run parallel to which, and which phase had the highest priority at each point in time, as shown in the expected view below.
Here it is easy to distinguish the highest-priority phase at each time (HIGHEST_PRIO), and adding up their durations would yield the actual total duration.
View V_Parallel_Phases (ID, START_TS, END_TS, PRIO, HIGHEST_PRIO)
-> Optional Columns: Part_of_ID / Runs_Parallel
1 10:00:00 10:05:20 10 True (--> Part_1 / False)
1 10:05:20 10:15:00 10 False (--> Part_2 / True)
2 10:05:00 10:10:00 11 False (--> Part_1 / True)
3 10:05:20 10:15:00 9 True (--> Part_1 / True)
1 10:15:00 10:16:00 10 True (--> Part_3 / True)
1 10:16:00 10:20:10 10 False (--> Part_4 / True)
4 10:16:00 10:24:00 8 True (--> Part_1 / True)
4 10:24:00 10:25:00 8 False (--> Part_2 / True)
5 10:24:00 10:45:15 1 True (--> Part_1 / True)
6 10:26:00 10:30:00 10 False (--> Part_1 / True)
7 10:27:00 10:35:00 15 False (--> Part_1 / True)
8 10:34:00 10:45:15 5 False (--> Part_1 / True)
8 10:45:15 10:50:00 5 True (--> Part_2 / True)
9 10:50:00 10:55:00 20 True (--> Part_2 / False)
10 10:55:00 11:00:00 15 True (--> Part_2 / False)
Unfortunately I am not aware of an efficient way to write this query. The current solution was to do the above calculations programmatically in the tool that generates a large report, but it was a total failure: from the 30 seconds that were needed before these calculations, it now takes over 10 minutes, without even taking the phase priorities into consideration.
Then I thought of translating this code into SQL as either: a) a view, b) a materialized view, or c) a table that I would fill with a procedure once in a while (depending on the required duration).
PS: I am aware that Oracle has analytic functions that can handle complicated queries, but I am not aware of which could actually help me with the current problem.
Thank you in advance!
This is an incomplete answer, but I need to know if this approach is viable before going on. I believe it is possible to do completely in SQL, but I am not sure how the performance will be.
First find out all points in time where there is a transition:
CREATE VIEW Events AS
SELECT START_TS AS TS
FROM Phases
UNION
SELECT END_TS AS TS
FROM Phases
;
Then create (start, end) tuples from those points in time:
CREATE VIEW Segments AS
SELECT START.TS AS START_TS,
       MIN(END.TS) AS END_TS
FROM Events AS START
JOIN Events AS END
  ON START.TS < END.TS
GROUP BY START.TS
;
From here on, doing the rest should be fairly straightforward. Here is a query that lists the segments and all the phases that are active in each segment:
SELECT *
FROM Segments
JOIN Phases
  ON Segments.START_TS BETWEEN Phases.START_TS AND Phases.END_TS
 AND Segments.END_TS BETWEEN Phases.START_TS AND Phases.END_TS
ORDER BY Segments.START_TS
;
The rest can be done with subselects and some aggregates; a sketch of that final step follows after the fiddle link below.
| START_TS | END_TS | ID | START_TS | END_TS | PRIO |
|----------|----------|----|----------|----------|------|
| 10:00:00 | 10:05:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:05:00 | 10:05:20 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:05:00 | 10:05:20 | 2 | 10:05:00 | 10:10:00 | 11 |
| 10:05:20 | 10:10:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:05:20 | 10:10:00 | 2 | 10:05:00 | 10:10:00 | 11 |
| 10:05:20 | 10:10:00 | 3 | 10:05:20 | 10:15:00 | 9 |
| 10:10:00 | 10:15:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:10:00 | 10:15:00 | 3 | 10:05:20 | 10:15:00 | 9 |
| 10:15:00 | 10:16:00 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:16:00 | 10:20:10 | 1 | 10:00:00 | 10:20:10 | 10 |
| 10:16:00 | 10:20:10 | 4 | 10:16:00 | 10:25:00 | 8 |
| 10:20:10 | 10:24:00 | 4 | 10:16:00 | 10:25:00 | 8 |
| 10:24:00 | 10:25:00 | 4 | 10:16:00 | 10:25:00 | 8 |
| 10:24:00 | 10:25:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:25:00 | 10:26:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:26:00 | 10:27:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:26:00 | 10:27:00 | 6 | 10:26:00 | 10:30:00 | 10 |
| 10:27:00 | 10:30:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:27:00 | 10:30:00 | 6 | 10:26:00 | 10:30:00 | 10 |
| 10:27:00 | 10:30:00 | 7 | 10:27:00 | 10:35:00 | 15 |
| 10:30:00 | 10:34:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:30:00 | 10:34:00 | 7 | 10:27:00 | 10:35:00 | 15 |
| 10:34:00 | 10:35:00 | 8 | 10:34:00 | 10:50:00 | 5 |
| 10:34:00 | 10:35:00 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:34:00 | 10:35:00 | 7 | 10:27:00 | 10:35:00 | 15 |
| 10:35:00 | 10:45:15 | 5 | 10:24:00 | 10:45:15 | 1 |
| 10:35:00 | 10:45:15 | 8 | 10:34:00 | 10:50:00 | 5 |
| 10:45:15 | 10:50:00 | 8 | 10:34:00 | 10:50:00 | 5 |
| 10:50:00 | 10:55:00 | 9 | 10:50:00 | 10:55:00 | 20 |
| 10:55:00 | 11:00:00 | 10 | 10:55:00 | 11:00:00 | 15 |
There is a SQL fiddle demonstrating the whole thing here:
http://sqlfiddle.com/#!9/d801b/2
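Building on the Segments view above, here is a hedged sketch of that final step (not from the original answer). The expected view marks phase 5 (PRIO 1) as the winner, so a lower PRIO value is assumed to mean a higher priority; per segment, the active phase with the minimal PRIO is flagged:

-- Sketch: flag, for each segment, whether the phase has the best (lowest) PRIO
-- among all phases active in that segment.
SELECT Segments.START_TS, Segments.END_TS, Phases.ID, Phases.PRIO,
       CASE WHEN Phases.PRIO = (SELECT MIN(P2.PRIO)
                                FROM Phases AS P2
                                WHERE Segments.START_TS BETWEEN P2.START_TS AND P2.END_TS
                                  AND Segments.END_TS BETWEEN P2.START_TS AND P2.END_TS)
            THEN 'True' ELSE 'False' END AS HIGHEST_PRIO
FROM Segments
JOIN Phases
  ON Segments.START_TS BETWEEN Phases.START_TS AND Phases.END_TS
 AND Segments.END_TS BETWEEN Phases.START_TS AND Phases.END_TS
ORDER BY Segments.START_TS, Phases.PRIO;

Summing END_TS - START_TS over only the HIGHEST_PRIO = 'True' rows would then give the actual total duration the question asks for.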