Calculate working hours between 2 dates in PostgreSQL - sql

I am developing an algorithm with Postgres (PL/pgSQL) and I need to calculate the number of working hours between 2 timestamps, taking into account that weekends are not worked and that, on the remaining days, only the hours from 8:00 to 15:00 count.
Examples:
From Dec 3rd at 14:00 to Dec 4th at 09:00 should count 2 hours:
3rd = 1, 4th = 1
From Dec 3rd at 15:00 to Dec 7th at 08:00 should count 8 hours:
3rd = 0, 4th = 8, 5th = 0, 6th = 0, 7th = 0
It would be great to consider hour fractions as well.

According to your question working hours are: Mo–Fr, 08:00–15:00.
Rounded results
For just two given timestamps
Operating on units of 1 hour. Fractions are ignored, therefore not precise but simple:
SELECT count(*) AS work_hours
FROM generate_series (timestamp '2013-06-24 13:30'
, timestamp '2013-06-24 15:29' - interval '1h'
, interval '1h') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:00';
The function generate_series() generates one row if the end is greater than the start, plus another row for every full given interval (1 hour). This would count every hour that is even partly entered. To ignore fractional hours, subtract 1 hour from the end. And don't count hours starting after 14:00, since the last full hour inside the 08:00–15:00 window starts at 14:00.
Use the field pattern ISODOW instead of DOW for EXTRACT() to simplify expressions; it returns 7 instead of 0 for Sundays.
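For example (my own quick illustration, not part of the original answer), Dec 6th 2009 is a Sunday:
SELECT EXTRACT(DOW    FROM date '2009-12-06') AS dow     -- 0
     , EXTRACT(ISODOW FROM date '2009-12-06') AS isodow; -- 7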
A simple (and very cheap) cast to time makes it easy to identify qualifying hours.
Fractions of an hour are ignored, even if the fractions at the beginning and end of the interval would add up to an hour or more.
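As a quick sanity check (my own addition), plugging the first example from the question into the same query yields the expected 2 hours:
SELECT count(*) AS work_hours
FROM generate_series (timestamp '2009-12-03 14:00'
                    , timestamp '2009-12-04 09:00' - interval '1h'
                    , interval '1h') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND   h::time >= '08:00'
AND   h::time <= '14:00';
-- returns 2: the hour starting 14:00 on the 3rd and the hour starting 08:00 on the 4th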
For a whole table
CREATE TABLE t (t_id int PRIMARY KEY, t_start timestamp, t_end timestamp);
INSERT INTO t VALUES
(1, '2009-12-03 14:00', '2009-12-04 09:00')
, (2, '2009-12-03 15:00', '2009-12-07 08:00') -- examples in question
, (3, '2013-06-24 07:00', '2013-06-24 12:00')
, (4, '2013-06-24 12:00', '2013-06-24 23:00')
, (5, '2013-06-23 13:00', '2013-06-25 11:00')
, (6, '2013-06-23 14:01', '2013-06-24 08:59') -- max. fractions at begin and end
;
Query:
SELECT t_id, count(*) AS work_hours
FROM (
SELECT t_id, generate_series (t_start, t_end - interval '1h', interval '1h') AS h
FROM t
) sub
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:00'
GROUP BY 1
ORDER BY 1;
More precision
To get more precision you can use smaller time units. 5-minute slices for instance:
SELECT t_id, count(*) * interval '5 min' AS work_interval
FROM (
SELECT t_id, generate_series (t_start, t_end - interval '5 min', interval '5 min') AS h
FROM t
) sub
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:55' -- 15:00 - interval '5 min'
GROUP BY 1
ORDER BY 1;
The smaller the unit the higher the cost.
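Taken further (my own variation on the same pattern, not from the original answer), 1-minute slices get within a minute of the exact result at roughly 60 times the row count of the 1-hour version:
SELECT t_id, count(*) * interval '1 min' AS work_interval
FROM (
   SELECT t_id, generate_series (t_start, t_end - interval '1 min', interval '1 min') AS h
   FROM   t
   ) sub
WHERE EXTRACT(ISODOW FROM h) < 6
AND   h::time >= '08:00'
AND   h::time <  '15:00'  -- last qualifying slice starts at 14:59
GROUP BY 1
ORDER BY 1;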
Cleaner with LATERAL in Postgres 9.3+
In combination with the new LATERAL feature in Postgres 9.3, the above query can then be written as:
1-hour precision:
SELECT t.t_id, h.work_hours
FROM t
LEFT JOIN LATERAL (
SELECT count(*) AS work_hours
FROM generate_series (t.t_start, t.t_end - interval '1h', interval '1h') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:00'
) h ON TRUE
ORDER BY 1;
5-minute precision:
SELECT t.t_id, h.work_interval
FROM t
LEFT JOIN LATERAL (
SELECT count(*) * interval '5 min' AS work_interval
FROM generate_series (t.t_start, t.t_end - interval '5 min', interval '5 min') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:55'
) h ON TRUE
ORDER BY 1;
This has the additional advantage that intervals containing zero working hours are not excluded from the result, as they are in the versions above.
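For instance (my own illustrative row, not part of the original test data), an interval that falls entirely on a weekend now shows up with a zero count instead of disappearing:
INSERT INTO t VALUES (7, '2013-06-22 09:00', '2013-06-23 17:00');  -- Saturday to Sunday
-- The LATERAL queries return t_id = 7 with work_hours = 0 (or work_interval = 00:00:00),
-- while the GROUP BY variants above omit the row entirely.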
More about LATERAL:
Find most common elements in array with a group by
Insert multiple rows in one table based on number in another table
Exact results
Postgres 8.4+
Alternatively, deal with the start and end of the time frame separately to get exact results down to the microsecond. This makes the query more complex, but cheaper and exact:
WITH var AS (SELECT '08:00'::time AS v_start
, '15:00'::time AS v_end)
SELECT t_id
, COALESCE(h.h, '0') -- add / subtract fractions
- CASE WHEN EXTRACT(ISODOW FROM t_start) < 6
AND t_start::time > v_start
AND t_start::time < v_end
THEN t_start - date_trunc('hour', t_start)
ELSE '0'::interval END
+ CASE WHEN EXTRACT(ISODOW FROM t_end) < 6
AND t_end::time > v_start
AND t_end::time < v_end
THEN t_end - date_trunc('hour', t_end)
ELSE '0'::interval END AS work_interval
FROM t CROSS JOIN var
LEFT JOIN ( -- count full hours, similar to above solutions
SELECT t_id, count(*)::int * interval '1h' AS h
FROM (
SELECT t_id, v_start, v_end
, generate_series (date_trunc('hour', t_start)
, date_trunc('hour', t_end) - interval '1h'
, interval '1h') AS h
FROM t, var
) sub
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= v_start
AND h::time <= v_end - interval '1h'
GROUP BY 1
) h USING (t_id)
ORDER BY 1;
Postgres 9.2+ with tsrange
The new range types offer a more elegant solution for exact results in combination with the intersection operator *:
Simple function for time ranges spanning only one day:
CREATE OR REPLACE FUNCTION f_worktime_1day(_start timestamp, _end timestamp)
RETURNS interval
LANGUAGE sql IMMUTABLE AS
$func$ -- _start & _end within one calendar day! - you may want to check ...
SELECT CASE WHEN extract(ISODOW from _start) < 6 THEN (
SELECT COALESCE(upper(h) - lower(h), '0')
FROM (
SELECT tsrange '[2000-1-1 08:00, 2000-1-1 15:00)' -- hours hard coded
* tsrange( '2000-1-1'::date + _start::time
, '2000-1-1'::date + _end::time ) AS h
) sub
) ELSE '0' END
$func$;
If your ranges never span multiple days, that's all you need.
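For example (my own call, assuming the function above), a range inside a single Monday:
SELECT f_worktime_1day('2013-06-24 07:30', '2013-06-24 12:15') AS worktime;
-- the 08:00–15:00 window intersected with 07:30–12:15 gives 08:00–12:15, i.e. 04:15:00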
Else, use this wrapper function to deal with any interval:
CREATE OR REPLACE FUNCTION f_worktime(_start timestamp
, _end timestamp
, OUT work_time interval)
LANGUAGE plpgsql IMMUTABLE AS
$func$
BEGIN
CASE _end::date - _start::date -- spanning how many days?
WHEN 0 THEN -- all in one calendar day
work_time := f_worktime_1day(_start, _end);
WHEN 1 THEN -- wrap around midnight once
work_time := f_worktime_1day(_start, NULL)
+ f_worktime_1day(_end::date, _end);
ELSE -- multiple days
work_time := f_worktime_1day(_start, NULL)
+ f_worktime_1day(_end::date, _end)
+ (SELECT count(*) * interval '7:00' -- workday hard coded!
FROM generate_series(_start::date + 1
, _end::date - 1, '1 day') AS t
WHERE extract(ISODOW from t) < 6);
END CASE;
END
$func$;
Call:
SELECT t_id, f_worktime(t_start, t_end) AS worktime
FROM t
ORDER BY 1;

How about this: create a small table with 24*7 rows, one row for each hour in a week.
CREATE TABLE hours (
hour timestamp not null,
is_working boolean not null
);
INSERT INTO hours (hour, is_working) VALUES
('2009-11-2 00:00:00', false),
('2009-11-2 01:00:00', false),
. . .
('2009-11-2 08:00:00', true),
. . .
('2009-11-2 15:00:00', true),
('2009-11-2 16:00:00', false),
. . .
('2009-11-2 23:00:00', false);
Likewise add 24 rows for each of the other days. It doesn't matter what year or month you give, as you'll see in a moment. You just need to represent all seven days of the week.
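To avoid typing all 168 rows by hand, the table could also be filled in one statement (my own variation, using the same week starting Monday 2009-11-02 and marking 08:00 through 15:00 on weekdays as working, like the rows above):
INSERT INTO hours (hour, is_working)
SELECT h
     , EXTRACT(ISODOW FROM h) < 6 AND h::time BETWEEN '08:00' AND '15:00'
FROM   generate_series (timestamp '2009-11-02 00:00'
                      , timestamp '2009-11-08 23:00'
                      , interval '1h') h;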
SELECT t.id, t."start", t."end", SUM(CASE WHEN h.is_working THEN 1 ELSE 0 END) AS hours_worked
FROM mytable t JOIN hours h
ON (EXTRACT(DOW FROM h.hour) BETWEEN EXTRACT(DOW FROM t."start")
AND EXTRACT(DOW FROM t."end"))
AND (EXTRACT(DOW FROM h.hour) > EXTRACT(DOW FROM t."start")
OR EXTRACT(HOUR FROM h.hour) >= EXTRACT(HOUR FROM t."start"))
AND (EXTRACT(DOW FROM h.hour) < EXTRACT(DOW FROM t."end")
OR EXTRACT(HOUR FROM h.hour) <= EXTRACT(HOUR FROM t."end"))
GROUP BY t.id, t."start", t."end";

The following functions take these inputs:
the working start time of the day
the working end time of the day
the start time
the end time
-- helper function
CREATE OR REPLACE FUNCTION get_working_time_in_a_day(sdt TIMESTAMP, edt TIMESTAMP, swt TIME, ewt TIME) RETURNS INT AS
$$
DECLARE
sd TIMESTAMP; ed TIMESTAMP; swdt TIMESTAMP; ewdt TIMESTAMP; seconds INT;
BEGIN
swdt = sdt::DATE || ' ' || swt; -- work start datetime for a day
ewdt = sdt::DATE || ' ' || ewt; -- work end datetime for a day
IF (sdt < swdt AND edt <= swdt) -- case 1 and 2
THEN
seconds = 0;
END IF;
IF (sdt < swdt AND edt > swdt AND edt <= ewdt) -- case 3 and 4
THEN
seconds = EXTRACT(EPOCH FROM (edt - swdt));
END IF;
IF (sdt < swdt AND edt > swdt AND edt > ewdt) -- case 5
THEN
seconds = EXTRACT(EPOCH FROM (ewdt - swdt));
END IF;
IF (sdt = swdt AND edt > swdt AND edt <= ewdt) -- case 6 and 7
THEN
seconds = EXTRACT(EPOCH FROM (edt - sdt));
END IF;
IF (sdt = swdt AND edt > ewdt) -- case 8
THEN
seconds = EXTRACT(EPOCH FROM (ewdt - sdt));
END IF;
IF (sdt > swdt AND edt <= ewdt) -- case 9 and 10
THEN
seconds = EXTRACT(EPOCH FROM (edt - sdt));
END IF;
IF (sdt > swdt AND sdt < ewdt AND edt > ewdt) -- case 11
THEN
seconds = EXTRACT(EPOCH FROM (ewdt - sdt));
END IF;
IF (sdt >= ewdt AND edt > ewdt) -- case 12 and 13
THEN
seconds = 0;
END IF;
RETURN seconds;
END;
$$
LANGUAGE plpgsql;
-- Get work time difference
CREATE OR REPLACE FUNCTION get_working_time(sdt TIMESTAMP, edt TIMESTAMP, swt TIME, ewt TIME) RETURNS INT AS
$$
DECLARE
seconds INT = 0;
strst VARCHAR(9) = ' 00:00:00';
stret VARCHAR(9) = ' 23:59:59';
tend TIMESTAMP; tempEdt TIMESTAMP;
x int;
BEGIN
<<test>>
WHILE sdt <= edt LOOP
tend = sdt::DATE || stret; -- get the false end datetime for start time
IF edt >= tend
THEN
tempEdt = tend;
ELSE
tempEdt = edt;
END IF;
-- skip saturday and sunday
x = EXTRACT(DOW FROM sdt);
if (x > 0 AND x < 6)
THEN
seconds = seconds + get_working_time_in_a_day(sdt, tempEdt, swt, ewt);
ELSE
-- RAISE NOTICE 'MISSED A DAY';
END IF;
sdt = (sdt + (INTERVAL '1 DAY'))::DATE || strst;
END LOOP test;
--RAISE NOTICE 'diff in minutes = %', (seconds / 60);
RETURN seconds;
END;
$$
LANGUAGE plpgsql;
-- Table Definition
DROP TABLE IF EXISTS test_working_time;
CREATE TABLE test_working_time(
pk SERIAL PRIMARY KEY,
start_datetime TIMESTAMP,
end_datetime TIMESTAMP,
start_work_time TIME,
end_work_time TIME
);
-- Test data insertion
INSERT INTO test_working_time VALUES
(1, '2015-11-03 01:00:00', '2015-11-03 07:00:00', '08:00:00', '22:00:00'),
(2, '2015-11-03 01:00:00', '2015-11-04 07:00:00', '08:00:00', '22:00:00'),
(3, '2015-11-03 01:00:00', '2015-11-05 07:00:00', '08:00:00', '22:00:00'),
(4, '2015-11-03 01:00:00', '2015-11-06 07:00:00', '08:00:00', '22:00:00'),
(5, '2015-11-03 01:00:00', '2015-11-07 07:00:00', '08:00:00', '22:00:00'),
(6, '2015-11-03 01:00:00', '2015-11-03 08:00:00', '08:00:00', '22:00:00'),
(7, '2015-11-03 01:00:00', '2015-11-04 08:00:00', '08:00:00', '22:00:00'),
(8, '2015-11-03 01:00:00', '2015-11-05 08:00:00', '08:00:00', '22:00:00'),
(9, '2015-11-03 01:00:00', '2015-11-06 08:00:00', '08:00:00', '22:00:00'),
(10, '2015-11-03 01:00:00', '2015-11-07 08:00:00', '08:00:00', '22:00:00'),
(11, '2015-11-03 01:00:00', '2015-11-03 11:00:00', '08:00:00', '22:00:00'),
(12, '2015-11-03 01:00:00', '2015-11-04 11:00:00', '08:00:00', '22:00:00'),
(13, '2015-11-03 01:00:00', '2015-11-05 11:00:00', '08:00:00', '22:00:00'),
(14, '2015-11-03 01:00:00', '2015-11-06 11:00:00', '08:00:00', '22:00:00'),
(15, '2015-11-03 01:00:00', '2015-11-07 11:00:00', '08:00:00', '22:00:00'),
(16, '2015-11-03 01:00:00', '2015-11-03 22:00:00', '08:00:00', '22:00:00'),
(17, '2015-11-03 01:00:00', '2015-11-04 22:00:00', '08:00:00', '22:00:00'),
(18, '2015-11-03 01:00:00', '2015-11-05 22:00:00', '08:00:00', '22:00:00'),
(19, '2015-11-03 01:00:00', '2015-11-06 22:00:00', '08:00:00', '22:00:00'),
(20, '2015-11-03 01:00:00', '2015-11-07 22:00:00', '08:00:00', '22:00:00'),
(21, '2015-11-03 01:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(22, '2015-11-03 01:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(23, '2015-11-03 01:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(24, '2015-11-03 01:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(25, '2015-11-03 01:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(26, '2015-11-03 08:00:00', '2015-11-03 11:00:00', '08:00:00', '22:00:00'),
(27, '2015-11-03 08:00:00', '2015-11-04 11:00:00', '08:00:00', '22:00:00'),
(28, '2015-11-03 08:00:00', '2015-11-05 11:00:00', '08:00:00', '22:00:00'),
(29, '2015-11-03 08:00:00', '2015-11-06 11:00:00', '08:00:00', '22:00:00'),
(30, '2015-11-03 08:00:00', '2015-11-07 11:00:00', '08:00:00', '22:00:00'),
(31, '2015-11-03 08:00:00', '2015-11-03 22:00:00', '08:00:00', '22:00:00'),
(32, '2015-11-03 08:00:00', '2015-11-04 22:00:00', '08:00:00', '22:00:00'),
(33, '2015-11-03 08:00:00', '2015-11-05 22:00:00', '08:00:00', '22:00:00'),
(34, '2015-11-03 08:00:00', '2015-11-06 22:00:00', '08:00:00', '22:00:00'),
(35, '2015-11-03 08:00:00', '2015-11-07 22:00:00', '08:00:00', '22:00:00'),
(36, '2015-11-03 08:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(37, '2015-11-03 08:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(38, '2015-11-03 08:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(39, '2015-11-03 08:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(40, '2015-11-03 08:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(41, '2015-11-03 12:00:00', '2015-11-03 18:00:00', '08:00:00', '22:00:00'),
(42, '2015-11-03 12:00:00', '2015-11-04 18:00:00', '08:00:00', '22:00:00'),
(43, '2015-11-03 12:00:00', '2015-11-05 18:00:00', '08:00:00', '22:00:00'),
(44, '2015-11-03 12:00:00', '2015-11-06 18:00:00', '08:00:00', '22:00:00'),
(45, '2015-11-03 12:00:00', '2015-11-07 18:00:00', '08:00:00', '22:00:00'),
(46, '2015-11-03 12:00:00', '2015-11-03 22:00:00', '08:00:00', '22:00:00'),
(47, '2015-11-03 12:00:00', '2015-11-04 22:00:00', '08:00:00', '22:00:00'),
(48, '2015-11-03 12:00:00', '2015-11-05 22:00:00', '08:00:00', '22:00:00'),
(49, '2015-11-03 12:00:00', '2015-11-06 22:00:00', '08:00:00', '22:00:00'),
(50, '2015-11-03 12:00:00', '2015-11-07 22:00:00', '08:00:00', '22:00:00'),
(51, '2015-11-03 12:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(52, '2015-11-03 12:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(53, '2015-11-03 12:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(54, '2015-11-03 12:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(55, '2015-11-03 12:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(56, '2015-11-03 22:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(57, '2015-11-03 22:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(58, '2015-11-03 22:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(59, '2015-11-03 22:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(60, '2015-11-03 22:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(61, '2015-11-03 22:30:00', '2015-11-03 23:30:00', '08:00:00', '22:00:00'),
(62, '2015-11-03 22:30:00', '2015-11-04 23:30:00', '08:00:00', '22:00:00'),
(63, '2015-11-03 22:30:00', '2015-11-05 23:30:00', '08:00:00', '22:00:00'),
(64, '2015-11-03 22:30:00', '2015-11-06 23:30:00', '08:00:00', '22:00:00'),
(65, '2015-11-03 22:30:00', '2015-11-07 23:30:00', '08:00:00', '22:00:00');
-- select query to get work time difference
SELECT
start_datetime,
end_datetime,
start_work_time,
end_work_time,
get_working_time(start_datetime, end_datetime, start_work_time, end_work_time) AS diff_in_seconds
FROM
test_working_time;
This gives the difference between the start and end datetimes, counting only the working hours, in seconds.
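As a quick check (my own example, reusing the functions above): for row 2 of the test data, only the Tuesday working window 08:00–22:00 counts, i.e. 14 hours:
SELECT get_working_time('2015-11-03 01:00'::timestamp, '2015-11-04 07:00'::timestamp,
                        '08:00'::time, '22:00'::time) AS seconds;  -- 50400 (14 hours)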

Related

Get Period normalized

Any help with the following question is greatly appreciated. I have the weekly schedule of a shop, but the shop may be open only on certain days of the week instead of the whole week, so I need to calculate the shop's working days.
The week starts on Sunday.
CREATE OR REPLACE TEMP TABLE fact_shop_schedule (
shop_id varchar ,
shop_start_time timestamp ,
shop_end_time timestamp ,
schedule_start_time varchar ,
schedule_end_time varchar ,
day_of_week number
);
INSERT INTO fact_shop_schedule (shop_id, shop_start_time, shop_end_time, schedule_start_time, schedule_end_time, day_of_week)
VALUES
(1000, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'MONDAY 00:00', 'MONDAY 23:59', 1),
(1000, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'TUESDAY 00:00', 'TUESDAY 23:59', 2),
(1000, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'WEDNESDAY 00:00', 'WEDNESDAY 23:59', 3),
(1000, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'THURSDAY 00:00', 'THURSDAY 23:59', 4),
(1000, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'FRIDAY 00:00', 'FRIDAY 23:59', 5),
(1000, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'SATURDAY 00:00', 'SATURDAY 23:59', 6),
(1000, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'SUNDAY 00:00', 'SUNDAY 23:59', 7),
(1001, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'TUESDAY 00:00', 'TUESDAY 23:59', 2),
(1001, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'WEDNESDAY 00:00', 'WEDNESDAY 23:59', 3),
(1001, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'THURSDAY 00:00', 'THURSDAY 23:59', 4),
(1001, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'SATURDAY 00:00', 'SATURDAY 23:59', 6),
(1002, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'MONDAY 00:00', 'MONDAY 23:59', 1),
(1002, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'TUESDAY 00:00', 'TUESDAY 23:59', 2),
(1002, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'WEDNESDAY 00:00', 'WEDNESDAY 23:59', 3),
(1002, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'FRIDAY 00:00', 'FRIDAY 23:59', 5),
(1002, '2022-04-26 07:00:00'::timestamp, '2022-05-01 06:59:00'::timestamp, 'SATURDAY 00:00', 'SATURDAY 23:59', 6)
;
CREATE OR REPLACE TEMP TABLE temp_source
AS
SELECT
t2.shop_id AS shop_id
, t2.shop_start_time AS start_time
, t2.shop_end_time AS end_time
, t2.schedule_start_time AS schedule_start_time
, t2.schedule_end_time AS schedule_end_time
, regexp_substr(t2.schedule_start_time, '^[^, ]*') AS schedule_start_day
, regexp_substr(t2.schedule_end_time, '^[^, ]*') AS schedule_end_day
, dd.day AS day
, UPPER(coalesce(dd.day_name,'Sunday')) AS day_name
, dd.day_of_week AS day_of_week
, dd.week AS week
, dd.last_day_of_week AS last_day_of_week
, dd.last_day_of_month AS last_day_of_month
FROM fact_shop_schedule t2
INNER JOIN dimension_date dd ON t2.shop_start_time::date >= dd.week::date AND t2.shop_start_time::date <= dd.last_day_of_week::date
AND regexp_substr(t2.schedule_start_time, '^[^, ]*') = UPPER(coalesce(dd.day_name,'Sunday'))
WHERE 1=1
ORDER BY dd.week, day_of_week;
CREATE OR REPLACE TEMP TABLE temp_cte
AS
SELECT
shop_id
, day
, start_time
, end_time
, schedule_start_time
, schedule_end_time
, schedule_start_day
, schedule_end_day
, DATEADD(DAY, -ROW_NUMBER() OVER(PARTITION BY shop_id ORDER BY day), day) AS GroupingSet
FROM temp_source;
SELECT
shop_id AS shop_id
, MIN(day) AS start_date
, MAX(day) AS end_date
, CASE
WHEN end_date>start_date THEN datediff(day,start_date,end_date)+1
WHEN end_date=start_date THEN 1
END AS working_days
FROM temp_cte
WHERE 1=1
GROUP BY shop_id, GroupingSet
ORDER BY shop_id, start_date;

dates in postgres

I want to see how long clients spend connected to our website each day.
My source table is created as below and contains the data shown below.
CREATE TABLE source_ (
"nbr" numeric (10),
"begdate" timestamp,
"enddate" timestamp,
"str" varchar(35))
;
INSERT INTO source_
("nbr", "begdate", "enddate", "str")
VALUES
(111, '2019-11-25 07:00:00', '2019-11-25 08:00:00', 'TMP123'),
(222, '2019-03-01 12:04:02', '2019-03-01 12:05:02', 'SOC'),
(111, '2019-11-25 19:00:00', '2019-11-25 19:30:00', 'TMP12'),
(444, '2020-02-11 22:00:00', '2020-02-12 02:00:00', 'MARATEN'),
(444, '2020-02-11 23:00:00', '2020-02-12 01:00:00', 'MARA12'),
(444, '2020-02-12 13:00:00', '2020-02-12 14:00:00', 'MARA12'),
(444, '2020-02-12 07:00:00', '2020-02-12 08:00:00', 'MARA1222')
;
create table target_ (nbr numeric (10), date_ int(10), state varchar(30), terms interval);
I made an attempt below, but as you can see I associated the date_ (the day of the event) with the begdate, which is not always correct (see the 4th row) when the event spans two days.
INSERT INTO target_
(nbr, date_, state, terms)
select
nbr,
DATE_TRUNC('day', begdate) as date_,
state,
sum(term) as terms
from (
select
nbr, begdate,
(case
when trim(str) ~ '^TMP' then 'TMP'
when trim(str) ~ '^MARA' then 'MARATEN'
else 'SOC'
end) as state,
(enddate - begdate)as term from source_ ) X
group by nbr, date_, state;
expected output:
nbr  date_                   state    terms
111  2019-11-25 00:00:00+00  TMP       90
222  2019-03-01 00:00:00+00  SOC       60
444  2020-02-11 00:00:00+00  MARATEN  180
444  2020-02-12 00:00:00+00  MARATEN  300
If I understand correctly, you can use generate_series() to expand the periods and then aggregate:
select s.nbr, gs.dte,
       (case when trim(str) ~ '^TMP' then 'TMP'
             when trim(str) ~ '^MARA' then 'MARATEN'
             else 'SOC'
        end) as state,
       sum( least(s.enddate, gs.dte + interval '1 day') - greatest(s.begdate, gs.dte)) as terms
from source_ s cross join lateral
     generate_series(begdate::date, enddate::date, interval '1 day') gs(dte)
group by s.nbr, state, gs.dte
order by gs.dte, state;

How to resample DatetimeIndex in Pandas?

How can I resample DatetimeIndex objects in Pandas? Assume I have some existing DatetimeIndex object called oldindex. I would like to have the index that would be the result if I ran:
newindex = pd.Series(index=oldindex, data=None).resample('H').sum().index
But this solution does unnecessary computation (i.e., constructs a series and calculates sums) and it just looks ugly. Unfortunately, newindex = oldindex.resample('H') doesn't work, although I don't see any reason why it or something similar couldn't work in principle. For resampling the index, it doesn't matter what the operation (sum, mean, ffill, ...) would be.
Resampling is for changing data; if you have no data to change, you can just create a new index with the required frequency that covers the original range:
>>> oldindex
DatetimeIndex(['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04',
'2020-01-05', '2020-01-06', '2020-01-07', '2020-01-08',
'2020-01-09', '2020-01-10'],
dtype='datetime64[ns]', freq='D')
>>> newindex = pd.date_range(start=oldindex[0], end=oldindex[-1], freq='H')
>>> newindex
DatetimeIndex(['2020-01-01 00:00:00', '2020-01-01 01:00:00',
'2020-01-01 02:00:00', '2020-01-01 03:00:00',
'2020-01-01 04:00:00', '2020-01-01 05:00:00',
'2020-01-01 06:00:00', '2020-01-01 07:00:00',
'2020-01-01 08:00:00', '2020-01-01 09:00:00',
...
'2020-01-09 15:00:00', '2020-01-09 16:00:00',
'2020-01-09 17:00:00', '2020-01-09 18:00:00',
'2020-01-09 19:00:00', '2020-01-09 20:00:00',
'2020-01-09 21:00:00', '2020-01-09 22:00:00',
'2020-01-09 23:00:00', '2020-01-10 00:00:00'],
dtype='datetime64[ns]', length=217, freq='H')
(instead of oldindex[0] and oldindex[-1] you could also use oldindex.min() and oldindex.max() for clarity)

Join Events Between Time Intervals and Get the Optimal Time Difference?

The scenario is this: Within this time grid, join on documentation that has a time difference in minutes less than or equal to what is allowed in the time block. Display the time the event occurred and the related documented EventValue.
The solution I have now takes the first timestamp in each check interval; however, that falls apart when there is documentation both very early and very late in the same check interval. For an example see the TimeDiff column and then check the data in the Events table.
How can I join on one row in each time block that is the "optimal" time difference for the allowed time in each check interval?
DBMS is SQL Server 2016 (via SSMS)
DDL:
CREATE TABLE TimeGrid
([PersonID] int, [TimeBlockCD] varchar(6), [TimeBlockNBR] int, [CheckInterval] datetime, [NextCheckInterval] datetime)
;
INSERT INTO TimeGrid
([PersonID], [TimeBlockCD], [TimeBlockNBR], [CheckInterval], [NextCheckInterval])
VALUES
(123456, '5 min', 5, '2019-11-12 08:50:00', '2019-11-12 08:55:00'),
(123456, '5 min', 5, '2019-11-12 08:55:00', '2019-11-12 09:00:00'),
(123456, '5 min', 5, '2019-11-12 09:00:00', '2019-11-12 09:05:00'),
(123456, '15 min', 15, '2019-11-12 09:05:00', '2019-11-12 09:20:00'),
(123456, '15 min', 15, '2019-11-12 09:20:00', '2019-11-12 09:35:00'),
(123456, '30 min', 30, '2019-11-12 09:35:00', '2019-11-12 10:05:00')
;
CREATE TABLE Events
([PersonID] int, [EventDTS] datetime, [EventDSC] varchar(6), [EventTypeID] int, [EventValue] int)
;
INSERT INTO Events
([PersonID], [EventDTS], [EventDSC], [EventTypeID], [EventValue])
VALUES
(123456, '2019-11-12 09:05:00', 'Event3', 3, 316),
(123456, '2019-11-12 08:56:00', 'Event3', 3, 747),
(123456, '2019-11-12 08:59:00', 'Event3', 3, 343),
(123456, '2019-11-12 09:03:00', 'Event3', 3, 228)
;
My Attempt:
SELECT
tg.PersonID
,tg.TimeBlockCD
,tg.TimeBlockNBR
,tg.CheckInterval
,tg.NextCheckInterval
,e.EventDTS
,e.EventValue
,DATEDIFF(minute,LAG(e.EventDTS,1,tg.CheckInterval) over (PARTITION BY tg.PersonID ORDER BY e.EventDTS),e.EventDTS) as EventTimeDiff
FROM TimeGrid tg
OUTER APPLY (SELECT
e.PersonID
,e.EventDTS
,e.EventDSC
,e.EventTypeID
,e.EventValue
,ROW_NUMBER() OVER (PARTITION BY e.PersonID ORDER BY e.EventDTS) as RowNBR
FROM Events e
WHERE 1=1
and e.PersonID = tg.PersonID
and e.EventTypeID = 3
and e.EventDTS between tg.CheckInterval and tg.NextCheckInterval
) e
WHERE 1=1
and (e.RowNBR is null or e.RowNBR = 1)
ORDER BY tg.CheckInterval

sum before joining two tables

CREATE TABLE Daily
([DATE] datetime, [sales] int)
;
INSERT INTO Daily
([DATE], [sales])
VALUES
('2012-01-01 00:00:00', 1),
('2012-01-02 00:00:00', 2),
('2012-01-03 00:00:00', 3),
('2012-01-04 00:00:00', 4),
('2012-01-05 00:00:00', 5),
('2012-01-06 00:00:00', 6),
('2012-01-06 00:00:00', 5),
('2012-01-07 00:00:00', 7),
('2012-01-08 00:00:00', 8),
('2012-01-09 00:00:00', 9),
('2012-01-10 00:00:00', 10),
('2012-01-11 00:00:00', 11),
('2012-01-12 00:00:00', 12),
('2012-01-13 00:00:00', 13),
('2012-01-14 00:00:00', 14),
('2012-01-15 00:00:00', 15),
('2012-01-16 00:00:00', 16)
;
CREATE TABLE Weekly
([Weekly] datetime)
;
INSERT INTO Weekly
([Weekly])
VALUES
('2012-01-07 00:00:00'),
('2012-01-14 00:00:00'),
('2012-01-21 00:00:00')
;
I want the final output:
Date       Sales
1/7/2012   33
1/14/2012  77
Any help on this would be appreciated. Thanks in advance.
I would strongly recommend against storing this in a table: if any of your daily data changes, your weekly data will need to be changed too or it will be wrong. Instead, create a view as follows:
CREATE VIEW Weekly
AS
SELECT WeekEnd = DATEADD(WEEK, DATEDIFF(WEEK, 0, [DATE]) + 1, -2),
Sales = SUM(Sales)
FROM Daily
GROUP BY DATEADD(WEEK, DATEDIFF(WEEK, 0, [DATE]) + 1, -2);
You can use this in the same way you would the table you wanted to create, but it will always be in sync with the daily data. If you want to change your week start/end day (e.g. Monday–Sunday), you can alter the -2 in the DATEADD function.
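Querying the view against the sample Daily data (my own example; this assumes the Weekly table from the question is not created, since the view reuses that name) should give the requested totals:
SELECT WeekEnd, Sales
FROM Weekly
ORDER BY WeekEnd;
-- 2012-01-07 -> 33, 2012-01-14 -> 77, plus 2012-01-21 -> 31 for the trailing partial week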
(Based on the [] around column names I am guessing this is SQL-Server.)