TL;DR
What is the equivalent of the following Python code snippet in PostgreSQL?
df.groupby('column').apply(function)
Where df is a Pandas DataFrame instance.
Context
I am used to the Split-Apply-Combine Paradigm in Python and want to apply the same framework in PostgreSQL.
Suppose I have the following table with a time, place, and measurement:
CREATE TABLE IF NOT EXISTS test_table (place VARCHAR(10), time TIMESTAMP, measurement FLOAT);
TRUNCATE test_table;
INSERT INTO test_table
VALUES ('A', TO_TIMESTAMP('2022-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), 1.0),
('A', TO_TIMESTAMP('2022-01-01 00:15:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('A', TO_TIMESTAMP('2022-01-01 00:30:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('A', TO_TIMESTAMP('2022-01-01 00:45:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('A', TO_TIMESTAMP('2022-01-01 01:00:00', 'YYYY-MM-DD HH24:MI:SS'), 3.0),
('B', TO_TIMESTAMP('2022-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), 3.0),
('B', TO_TIMESTAMP('2022-01-01 00:15:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('B', TO_TIMESTAMP('2022-01-01 00:30:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('B', TO_TIMESTAMP('2022-01-01 00:45:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('B', TO_TIMESTAMP('2022-01-01 01:00:00', 'YYYY-MM-DD HH24:MI:SS'), 3.0),
('C', TO_TIMESTAMP('2022-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), 1.0),
('C', TO_TIMESTAMP('2022-01-01 00:15:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('C', TO_TIMESTAMP('2022-01-01 00:30:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('C', TO_TIMESTAMP('2022-01-01 00:45:00', 'YYYY-MM-DD HH24:MI:SS'), 2.0),
('C', TO_TIMESTAMP('2022-01-01 01:00:00', 'YYYY-MM-DD HH24:MI:SS'), 3.0);
Associated with each place is a time series. Sometimes the measurement device at a given place will flatline, and my goal is to move the flatlined rows to another table.
I have written the following query to get the table I want for a given place.
SELECT place, time, measurement
FROM (SELECT place, time, measurement, LEAD(measurement, 1) OVER (ORDER BY time) AS leading_measurement
FROM test_table WHERE place = 'A') q
WHERE NOT measurement = leading_measurement OR leading_measurement IS NULL;
Within the query I used a subquery with the clause WHERE place = 'A', because otherwise the cleaning algorithm would detect the last row of A as being the same as the first row of B, and I would not want to remove that row. But this also means I have not "cleaned" the other two places.
In theory I would like to GROUP BY place, run the query per group, and union the results. However, the GROUP BY clause only supports aggregate functions, and my query returns multiple rows. I want to somehow SELECT DISTINCT place and run a for loop over that list. The Python equivalent is:
test_table.groupby('place') \
    .apply(lambda place:
           place[place["measurement"].diff(1) != 0])
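For illustration, a sketch of the direction I am imagining: the "split" step seems to correspond to PARTITION BY in a window function, applied to my query above so the LEAD comparison cannot leak across places (untested whether this is the idiomatic route):
SELECT place, time, measurement
FROM (SELECT place, time, measurement,
             LEAD(measurement, 1) OVER (PARTITION BY place ORDER BY time) AS leading_measurement
      FROM test_table) q
WHERE NOT measurement = leading_measurement OR leading_measurement IS NULL;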
I can use (and have been using) Python to export to CSV and then import the CSV into PSQL tables, but my boss would greatly appreciate it being done in PostgreSQL.
I would appreciate any guidance, even a link to a relevant manual page.
Do not give up on SQL quite so quickly.
If I understand correctly, you want to copy to another table, then delete, those rows whose measurement merely repeats the next reading for the same place. So for Place A, move then delete the rows with times 00:15:00 and 00:30:00, because each carries the same measurement as the reading that follows it (the last row of the flat run, 00:45:00, is kept). Similar copy-and-delete applies to Place B and Place C.
If this is correct then it is quite easily done in SQL, at least in Postgres; it can be done in a single statement. Postgres permits DML operations in a CTE. This allows you to write a CTE to identify the rows to process in the first table, then use that CTE to insert those rows into the second table, and finally use the same CTE to delete them from the first table. (See demo)
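The target table is never defined in the question; a minimal sketch, assuming it simply mirrors test_table (the name test_table_flat is the one used in the statement below):
CREATE TABLE IF NOT EXISTS test_table_flat (place VARCHAR(10), time TIMESTAMP, measurement FLOAT);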
with to_move(rctid) as
( select ctid
from (select place, time, measurement
, lead(measurement, 1) over (partition by place order by time) as leading_measurement, ctid
from test_table
) q
where measurement = leading_measurement
)
, movers as
( insert into test_table_flat (place, time, measurement)
select t.place, t.time, t.measurement
from test_table t
where t.ctid in (select rctid from to_move)
)
delete
from test_table
where ctid in (select rctid from to_move) ;
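A quick sanity check against the sample data (a sketch, assuming the statement above has just run): each place had exactly two flatline rows, the ones whose measurement is repeated by the following reading.
-- expect 3 rows per place remaining and 2 rows per place moved
SELECT place, count(*) AS remaining FROM test_table GROUP BY place ORDER BY place;
SELECT place, count(*) AS moved FROM test_table_flat GROUP BY place ORDER BY place;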
I am trying to convert a timestamp to a date in SQLite, but it always gives me back NULL. I have tried many of the solutions I found, but none of them works for me.
Here is my SQL script, if you want to try:
CREATE TABLE Appointments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
fromDateTime TIMESTAMP NOT NULL,
toDateTime TIMESTAMP NOT NULL,
name VARCHAR(30) NOT NULL,
description VARCHAR(200),
type VARCHAR(50)
);
INSERT INTO Appointments ( fromDateTime, toDateTime, name, description ) VALUES
('21/03/2020 15:00:00', '21/03/2020 16:00:00', 'Test', 'Test Description'),
('22/03/2020 15:00:00', '22/03/2020 16:00:00', 'Test 2', 'Test 2 Description'),
('22/03/2020 16:00:00', '22/03/2020 17:00:00', 'Test 2', 'Test 2 Description'),
('22/03/2020 17:00:00', '22/03/2020 18:00:00', 'Test 2', 'Test 2 Description'),
('21/03/2020 00:00:00', '25/03/2020 23:59:59', 'Test', 'Test Description'),
('27/03/2020 08:00:00', '21/03/2020 12:00:00', 'Test', 'Test Description'),
('02/03/2020 08:00:00', '10/03/2020 12:00:00', 'Joelle', 'Test Joelle');
To expand on @forpas' comment, SQLite does not have a TIMESTAMP data type, so when you insert values into your fromDateTime and toDateTime columns they are converted to one of SQLite's five data storage classes: NULL, INTEGER, REAL, TEXT, BLOB. Since there is no error on INSERT, this gives the impression that SQLite has recognised the value as a timestamp, when in fact the value has just been stored as TEXT.
To use those values in any of SQLite's date and time functions, they must be either an ISO-8601 compatible string, the word 'now', or a number (interpreted as either a Julian day number or a Unix timestamp, depending on the context). So you need to change your times to YYYY-MM-DD hh:mm:ss format, i.e.
INSERT INTO Appointments ( fromDateTime, toDateTime, name, description ) VALUES
('2020-03-21 15:00:00', '2020-03-21 16:00:00', 'Test', 'Test Description'),
('2020-03-22 15:00:00', '2020-03-22 16:00:00', 'Test 2', 'Test 2 Description'),
('2020-03-22 16:00:00', '2020-03-22 17:00:00', 'Test 2', 'Test 2 Description'),
('2020-03-22 17:00:00', '2020-03-22 18:00:00', 'Test 2', 'Test 2 Description'),
('2020-03-21 00:00:00', '2020-03-25 23:59:59', 'Test', 'Test Description'),
('2020-03-27 08:00:00', '2020-03-21 12:00:00', 'Test', 'Test Description'),
('2020-03-02 08:00:00', '2020-03-10 12:00:00', 'Joelle', 'Test Joelle');
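If a table already holds values in the old DD/MM/YYYY layout, they can be rewritten in place with string functions; a hedged sketch, assuming every stored value matches that exact layout:
-- '21/03/2020 15:00:00' -> '2020-03-21 15:00:00'
UPDATE Appointments
SET fromDateTime = substr(fromDateTime, 7, 4) || '-' || substr(fromDateTime, 4, 2) || '-'
                || substr(fromDateTime, 1, 2) || substr(fromDateTime, 11),
    toDateTime = substr(toDateTime, 7, 4) || '-' || substr(toDateTime, 4, 2) || '-'
                || substr(toDateTime, 1, 2) || substr(toDateTime, 11);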
Note that datetime() is simply called with the column as a parameter and returns the string in an ISO-8601 format. To get just the date, such as YYYY-MM-DD (or another layout like DD - MM - YYYY), you need to use strftime() as well. So your query becomes:
SELECT strftime('%d - %m - %Y', fromDateTime) AS y,
strftime('%Y-%m-%d', fromDateTime) AS x
FROM Appointments
And the output:
y x
21 - 03 - 2020 2020-03-21
22 - 03 - 2020 2020-03-22
22 - 03 - 2020 2020-03-22
22 - 03 - 2020 2020-03-22
21 - 03 - 2020 2020-03-21
27 - 03 - 2020 2020-03-27
02 - 03 - 2020 2020-03-02
Demo on dbfiddle
For each row I want to find, using a window function, the maximum value that occurs before the current row's date but within one year of it. My attempt is not giving me the correct value and I am not sure why.
[MaxPrevious] is the desired result
[MaxPrevious2] is the window function result with the wrong value.
I need to use a window function because the final query is more complex; it is the date-condition part that is not working.
Desired Output:
Full table data and query:
--DROP TABLE [dbDelete].[dbo].[tblData]
--CREATE TABLE [dbDelete].[dbo].[tblData]
--([Date] datetime, [Part] varchar(10), [Tolerance] float);
--INSERT INTO [dbDelete].[dbo].[tblData] ([Date], [Part], [Tolerance])
--VALUES
--('2012-01-19 00:00:00', 'X1', 6.8),
--('2011-12-15 00:00:00', 'X1', 6.7),
--('2011-10-25 00:00:00', 'X1', 7.8),
--('2010-05-06 00:00:00', 'X1', 8.3),
--('2010-04-13 00:00:00', 'X1', 7.2),
--('2010-01-21 00:00:00', 'X1', 4.7),
--('2009-12-28 00:00:00', 'X1', 6.9),
--('2009-01-01 00:00:00', 'X1', 7.8),
--('2008-11-16 00:00:00', 'X1', 7.4),
--('2008-11-08 00:00:00', 'X1', 7.9),
--('2012-01-19 00:00:00', 'X2', 3.8),
--('2011-12-15 00:00:00', 'X2', 3.7),
--('2011-10-25 00:00:00', 'X2', 4.8),
--('2010-05-06 00:00:00', 'X2', 5.3),
--('2010-04-13 00:00:00', 'X2', 4.2),
--('2010-01-21 00:00:00', 'X2', 1.7),
--('2009-12-28 00:00:00', 'X2', 3.9),
--('2009-01-01 00:00:00', 'X2', 4.8),
--('2008-11-16 00:00:00', 'X2', 4.4),
--('2008-11-08 00:00:00', 'X2', 4.9)
--;
select t1.*
-- Find max before current record but within 1 year
,(select top (1) t2.[Tolerance] from [dbDelete].[dbo].[tblData] t2
where t2.[Date] < t1.[Date]
and t2.[Date] >= dateadd(year, -1, t1.[Date])
and t2.[Part] = t1.[Part]
order by t2.[Tolerance] desc) as [MaxPrevious]
-- Find max before current record but within 1 year
,max(case when t1.[Date] >= dateadd(year, -1, t1.[Date]) then t1.[Tolerance] else 0 end) over
(partition by t1.[Part]
order by t1.[Date]
rows between unbounded preceding and 1 preceding
) as [MaxPrevious2]
from [dbDelete].[dbo].[tblData] t1
order by t1.[Part], t1.[Date] desc
;with cte as (
select DATEADD(year, -1, [Date]) as PrevDate, * from [dbDelete].[dbo].[tblData]
)
select b.[Date], b.Part, b.Tolerance, max(a.Tolerance) as MaxPrevious from cte a
right join cte b
on a.Part = b.Part and a.[Date] >= b.[PrevDate] and a.[Date] < b.[Date]
group by b.[Date], b.Part, b.Tolerance
order by b.[Part], b.[Date] desc
I am not sure this is doable using just window functions: SQL Server's RANGE frame only accepts UNBOUNDED and CURRENT ROW bounds, so a frame limited to "the preceding year" by date cannot be expressed directly.
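As an alternative to the self-join, the correlated lookup from the question can be restructured with OUTER APPLY; a sketch against the same table, with the same semantics as the question's [MaxPrevious] subquery:
SELECT t1.[Date], t1.[Part], t1.[Tolerance], m.[MaxPrevious]
FROM [dbDelete].[dbo].[tblData] t1
OUTER APPLY (
    -- max Tolerance strictly before this row's Date, but within 1 year of it
    SELECT MAX(t2.[Tolerance]) AS [MaxPrevious]
    FROM [dbDelete].[dbo].[tblData] t2
    WHERE t2.[Part] = t1.[Part]
      AND t2.[Date] < t1.[Date]
      AND t2.[Date] >= DATEADD(year, -1, t1.[Date])
) m
ORDER BY t1.[Part], t1.[Date] DESC;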
I am developing an algorithm with Postgres (PL/pgSQL) and I need to calculate the number of working hours between two timestamps, taking into account that weekends do not count and the remaining days are counted only from 08:00 to 15:00.
Examples:
From Dec 3rd at 14:00 to Dec 4th at 09:00 should count 2 hours:
3rd = 1, 4th = 1
From Dec 3rd at 15:00 to Dec 7th at 08:00 should count 8 hours:
3rd = 0, 4th = 8, 5th = 0, 6th = 0, 7th = 0
It would be great to consider hour fractions as well.
According to your question working hours are: Mo–Fr, 08:00–15:00.
Rounded results
For just two given timestamps
Operating on units of 1 hour. Fractions are ignored, therefore not precise but simple:
SELECT count(*) AS work_hours
FROM generate_series (timestamp '2013-06-24 13:30'
, timestamp '2013-06-24 15:29' - interval '1h'
, interval '1h') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:00';
The function generate_series() generates one row if the end is greater than the start, plus another row for every full given interval (1 hour). This would count every hour entered into. To ignore fractional hours, subtract 1 hour from the end, and don't count hours starting after 14:00 (the last full working hour runs from 14:00 to 15:00).
Use the field pattern ISODOW instead of DOW for EXTRACT() to simplify expressions. Returns 7 instead of 0 for Sundays.
A simple (and very cheap) cast to time makes it easy to identify qualifying hours.
Fractions of an hour are ignored, even if fractions at begin and end of the interval would add up to an hour or more.
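As a sanity check against the first example in the question (Dec 3rd 14:00 to Dec 4th 09:00, expected 2 hours; Dec 3rd 2009 was a Thursday):
SELECT count(*) AS work_hours
FROM generate_series (timestamp '2009-12-03 14:00'
                    , timestamp '2009-12-04 09:00' - interval '1h'
                    , interval '1h') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND   h::time >= '08:00'
AND   h::time <= '14:00';
-- returns 2: the 14:00 hour on Dec 3rd and the 08:00 hour on Dec 4th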
For a whole table
CREATE TABLE t (t_id int PRIMARY KEY, t_start timestamp, t_end timestamp);
INSERT INTO t VALUES
(1, '2009-12-03 14:00', '2009-12-04 09:00')
, (2, '2009-12-03 15:00', '2009-12-07 08:00') -- examples in question
, (3, '2013-06-24 07:00', '2013-06-24 12:00')
, (4, '2013-06-24 12:00', '2013-06-24 23:00')
, (5, '2013-06-23 13:00', '2013-06-25 11:00')
, (6, '2013-06-23 14:01', '2013-06-24 08:59') -- max. fractions at begin and end
;
Query:
SELECT t_id, count(*) AS work_hours
FROM (
SELECT t_id, generate_series (t_start, t_end - interval '1h', interval '1h') AS h
FROM t
) sub
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:00'
GROUP BY 1
ORDER BY 1;
db<>fiddle here
Old sqlfiddle
More precision
To get more precision you can use smaller time units. 5-minute slices for instance:
SELECT t_id, count(*) * interval '5 min' AS work_interval
FROM (
SELECT t_id, generate_series (t_start, t_end - interval '5 min', interval '5 min') AS h
FROM t
) sub
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:55' -- 15:00 - interval '5 min'
GROUP BY 1
ORDER BY 1;
The smaller the unit the higher the cost.
Cleaner with LATERAL in Postgres 9.3+
In combination with the new LATERAL feature in Postgres 9.3, the above query can then be written as:
1-hour precision:
SELECT t.t_id, h.work_hours
FROM t
LEFT JOIN LATERAL (
SELECT count(*) AS work_hours
FROM generate_series (t.t_start, t.t_end - interval '1h', interval '1h') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:00'
) h ON TRUE
ORDER BY 1;
5-minute precision:
SELECT t.t_id, h.work_interval
FROM t
LEFT JOIN LATERAL (
SELECT count(*) * interval '5 min' AS work_interval
FROM generate_series (t.t_start, t.t_end - interval '5 min', interval '5 min') h
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= '08:00'
AND h::time <= '14:55'
) h ON TRUE
ORDER BY 1;
This has the additional advantage that intervals containing zero working hours are not excluded from the result, as they are in the versions above.
More about LATERAL:
Find most common elements in array with a group by
Insert multiple rows in one table based on number in another table
Exact results
Postgres 8.4+
Or you deal with start and end of the time frame separately to get exact results to the microsecond. Makes the query more complex, but cheaper and exact:
WITH var AS (SELECT '08:00'::time AS v_start
, '15:00'::time AS v_end)
SELECT t_id
, COALESCE(h.h, '0') -- add / subtract fractions
- CASE WHEN EXTRACT(ISODOW FROM t_start) < 6
AND t_start::time > v_start
AND t_start::time < v_end
THEN t_start - date_trunc('hour', t_start)
ELSE '0'::interval END
+ CASE WHEN EXTRACT(ISODOW FROM t_end) < 6
AND t_end::time > v_start
AND t_end::time < v_end
THEN t_end - date_trunc('hour', t_end)
ELSE '0'::interval END AS work_interval
FROM t CROSS JOIN var
LEFT JOIN ( -- count full hours, similar to above solutions
SELECT t_id, count(*)::int * interval '1h' AS h
FROM (
SELECT t_id, v_start, v_end
, generate_series (date_trunc('hour', t_start)
, date_trunc('hour', t_end) - interval '1h'
, interval '1h') AS h
FROM t, var
) sub
WHERE EXTRACT(ISODOW FROM h) < 6
AND h::time >= v_start
AND h::time <= v_end - interval '1h'
GROUP BY 1
) h USING (t_id)
ORDER BY 1;
db<>fiddle here
Old sqlfiddle
Postgres 9.2+ with tsrange
The new range types offer a more elegant solution for exact results in combination with the intersection operator *:
Simple function for time ranges spanning only one day:
CREATE OR REPLACE FUNCTION f_worktime_1day(_start timestamp, _end timestamp)
RETURNS interval
LANGUAGE sql IMMUTABLE AS
$func$ -- _start & _end within one calendar day! - you may want to check ...
SELECT CASE WHEN extract(ISODOW from _start) < 6 THEN (
SELECT COALESCE(upper(h) - lower(h), '0')
FROM (
SELECT tsrange '[2000-1-1 08:00, 2000-1-1 15:00)' -- hours hard coded
* tsrange( '2000-1-1'::date + _start::time
, '2000-1-1'::date + _end::time ) AS h
) sub
) ELSE '0' END
$func$;
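A quick call of the one-day function (a sketch; 2013-06-24 is a Monday, and the start is clamped to the 08:00 opening):
SELECT f_worktime_1day(timestamp '2013-06-24 07:30'
                     , timestamp '2013-06-24 12:00') AS work_time;  -- 04:00:00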
If your ranges never span multiple days, that's all you need.
Else, use this wrapper function to deal with any interval:
CREATE OR REPLACE FUNCTION f_worktime(_start timestamp
, _end timestamp
, OUT work_time interval)
LANGUAGE plpgsql IMMUTABLE AS
$func$
BEGIN
CASE _end::date - _start::date -- spanning how many days?
WHEN 0 THEN -- all in one calendar day
work_time := f_worktime_1day(_start, _end);
WHEN 1 THEN -- wrap around midnight once
work_time := f_worktime_1day(_start, NULL)
+ f_worktime_1day(_end::date, _end);
ELSE -- multiple days
work_time := f_worktime_1day(_start, NULL)
+ f_worktime_1day(_end::date, _end)
+ (SELECT count(*) * interval '7:00' -- workday hard coded!
FROM generate_series(_start::date + 1
, _end::date - 1, '1 day') AS t
WHERE extract(ISODOW from t) < 6);
END CASE;
END
$func$;
Call:
SELECT t_id, f_worktime(t_start, t_end) AS worktime
FROM t
ORDER BY 1;
db<>fiddle here
Old sqlfiddle
How about this: create a small table with 24*7 rows, one row for each hour in a week.
CREATE TABLE hours (
hour timestamp not null,
is_working boolean not null
);
INSERT INTO hours (hour, is_working) VALUES
('2009-11-2 00:00:00', false),
('2009-11-2 01:00:00', false),
. . .
('2009-11-2 08:00:00', true),
. . .
('2009-11-2 15:00:00', true),
('2009-11-2 16:00:00', false),
. . .
('2009-11-2 23:00:00', false);
Likewise add 24 rows for each of the other days. It doesn't matter what year or month you give, as you'll see in a moment. You just need to represent all seven days of the week.
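Rather than hand-writing all 168 INSERT rows, the lookup table can be populated with generate_series(); a sketch assuming the week starting Monday 2009-11-02, matching the sample rows above:
INSERT INTO hours (hour, is_working)
SELECT h
     , EXTRACT(ISODOW FROM h) < 6                -- Monday .. Friday
       AND h::time BETWEEN '08:00' AND '15:00'   -- the hours flagged true above
FROM generate_series (timestamp '2009-11-02 00:00'
                    , timestamp '2009-11-08 23:00'
                    , interval '1 hour') AS h;
With the table populated, the join below tallies the working hours.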
SELECT t.id, t.start, t.end, SUM(CASE WHEN h.is_working THEN 1 ELSE 0 END) AS hours_worked
FROM mytable t JOIN hours h
ON (EXTRACT(DOW FROM h.hour) BETWEEN EXTRACT(DOW FROM t.start)
AND EXTRACT(DOW FROM t.end))
AND (EXTRACT(DOW FROM h.hour) > EXTRACT(DOW FROM t.start)
OR EXTRACT(HOUR FROM h.hour) >= EXTRACT(HOUR FROM t.start))
AND (EXTRACT(DOW FROM h.hour) < EXTRACT(DOW FROM t.end)
OR EXTRACT(HOUR FROM h.hour) <= EXTRACT(HOUR FROM t.end))
GROUP BY t.id, t.start, t.end;
The following functions take four inputs:
working start time of the day
working end time of the day
start time
end time
-- helper function
CREATE OR REPLACE FUNCTION get_working_time_in_a_day(sdt TIMESTAMP, edt TIMESTAMP, swt TIME, ewt TIME) RETURNS INT AS
$$
DECLARE
sd TIMESTAMP; ed TIMESTAMP; swdt TIMESTAMP; ewdt TIMESTAMP; seconds INT;
BEGIN
swdt = sdt::DATE || ' ' || swt; -- work start datetime for a day
ewdt = sdt::DATE || ' ' || ewt; -- work end datetime for a day
IF (sdt < swdt AND edt <= swdt) -- case 1 and 2
THEN
seconds = 0;
END IF;
IF (sdt < swdt AND edt > swdt AND edt <= ewdt) -- case 3 and 4
THEN
seconds = EXTRACT(EPOCH FROM (edt - swdt));
END IF;
IF (sdt < swdt AND edt > swdt AND edt > ewdt) -- case 5
THEN
seconds = EXTRACT(EPOCH FROM (ewdt - swdt));
END IF;
IF (sdt = swdt AND edt > swdt AND edt <= ewdt) -- case 6 and 7
THEN
seconds = EXTRACT(EPOCH FROM (edt - sdt));
END IF;
IF (sdt = swdt AND edt > ewdt) -- case 8
THEN
seconds = EXTRACT(EPOCH FROM (ewdt - sdt));
END IF;
IF (sdt > swdt AND edt <= ewdt) -- case 9 and 10
THEN
seconds = EXTRACT(EPOCH FROM (edt - sdt));
END IF;
IF (sdt > swdt AND sdt < ewdt AND edt > ewdt) -- case 11
THEN
seconds = EXTRACT(EPOCH FROM (ewdt - sdt));
END IF;
IF (sdt >= ewdt AND edt > ewdt) -- case 12 and 13
THEN
seconds = 0;
END IF;
RETURN seconds;
END;
$$
LANGUAGE plpgsql;
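A quick check of the helper on its own (a sketch; both timestamps fall inside the working window, so the case 9/10 branch applies):
SELECT get_working_time_in_a_day('2015-11-03 09:00:00', '2015-11-03 12:30:00',
                                 '08:00:00', '22:00:00') AS seconds;  -- 12600 (3.5 hours)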
-- Get work time difference
CREATE OR REPLACE FUNCTION get_working_time(sdt TIMESTAMP, edt TIMESTAMP, swt TIME, ewt TIME) RETURNS INT AS
$$
DECLARE
seconds INT = 0;
strst VARCHAR(9) = ' 00:00:00';
stret VARCHAR(9) = ' 23:59:59';
tend TIMESTAMP; tempEdt TIMESTAMP;
x int;
BEGIN
<<test>>
WHILE sdt <= edt LOOP
tend = sdt::DATE || stret; -- get the false end datetime for start time
IF edt >= tend
THEN
tempEdt = tend;
ELSE
tempEdt = edt;
END IF;
-- skip saturday and sunday
x = EXTRACT(DOW FROM sdt);
if (x > 0 AND x < 6)
THEN
seconds = seconds + get_working_time_in_a_day(sdt, tempEdt, swt, ewt);
ELSE
-- RAISE NOTICE 'MISSED A DAY';
END IF;
sdt = (sdt + (INTERVAL '1 DAY'))::DATE || strst;
END LOOP test;
--RAISE NOTICE 'diff in minutes = %', (seconds / 60);
RETURN seconds;
END;
$$
LANGUAGE plpgsql;
-- Table Definition
DROP TABLE IF EXISTS test_working_time;
CREATE TABLE test_working_time(
pk SERIAL PRIMARY KEY,
start_datetime TIMESTAMP,
end_datetime TIMESTAMP,
start_work_time TIME,
end_work_time TIME
);
-- Test data insertion
INSERT INTO test_working_time VALUES
(1, '2015-11-03 01:00:00', '2015-11-03 07:00:00', '08:00:00', '22:00:00'),
(2, '2015-11-03 01:00:00', '2015-11-04 07:00:00', '08:00:00', '22:00:00'),
(3, '2015-11-03 01:00:00', '2015-11-05 07:00:00', '08:00:00', '22:00:00'),
(4, '2015-11-03 01:00:00', '2015-11-06 07:00:00', '08:00:00', '22:00:00'),
(5, '2015-11-03 01:00:00', '2015-11-07 07:00:00', '08:00:00', '22:00:00'),
(6, '2015-11-03 01:00:00', '2015-11-03 08:00:00', '08:00:00', '22:00:00'),
(7, '2015-11-03 01:00:00', '2015-11-04 08:00:00', '08:00:00', '22:00:00'),
(8, '2015-11-03 01:00:00', '2015-11-05 08:00:00', '08:00:00', '22:00:00'),
(9, '2015-11-03 01:00:00', '2015-11-06 08:00:00', '08:00:00', '22:00:00'),
(10, '2015-11-03 01:00:00', '2015-11-07 08:00:00', '08:00:00', '22:00:00'),
(11, '2015-11-03 01:00:00', '2015-11-03 11:00:00', '08:00:00', '22:00:00'),
(12, '2015-11-03 01:00:00', '2015-11-04 11:00:00', '08:00:00', '22:00:00'),
(13, '2015-11-03 01:00:00', '2015-11-05 11:00:00', '08:00:00', '22:00:00'),
(14, '2015-11-03 01:00:00', '2015-11-06 11:00:00', '08:00:00', '22:00:00'),
(15, '2015-11-03 01:00:00', '2015-11-07 11:00:00', '08:00:00', '22:00:00'),
(16, '2015-11-03 01:00:00', '2015-11-03 22:00:00', '08:00:00', '22:00:00'),
(17, '2015-11-03 01:00:00', '2015-11-04 22:00:00', '08:00:00', '22:00:00'),
(18, '2015-11-03 01:00:00', '2015-11-05 22:00:00', '08:00:00', '22:00:00'),
(19, '2015-11-03 01:00:00', '2015-11-06 22:00:00', '08:00:00', '22:00:00'),
(20, '2015-11-03 01:00:00', '2015-11-07 22:00:00', '08:00:00', '22:00:00'),
(21, '2015-11-03 01:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(22, '2015-11-03 01:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(23, '2015-11-03 01:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(24, '2015-11-03 01:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(25, '2015-11-03 01:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(26, '2015-11-03 08:00:00', '2015-11-03 11:00:00', '08:00:00', '22:00:00'),
(27, '2015-11-03 08:00:00', '2015-11-04 11:00:00', '08:00:00', '22:00:00'),
(28, '2015-11-03 08:00:00', '2015-11-05 11:00:00', '08:00:00', '22:00:00'),
(29, '2015-11-03 08:00:00', '2015-11-06 11:00:00', '08:00:00', '22:00:00'),
(30, '2015-11-03 08:00:00', '2015-11-07 11:00:00', '08:00:00', '22:00:00'),
(31, '2015-11-03 08:00:00', '2015-11-03 22:00:00', '08:00:00', '22:00:00'),
(32, '2015-11-03 08:00:00', '2015-11-04 22:00:00', '08:00:00', '22:00:00'),
(33, '2015-11-03 08:00:00', '2015-11-05 22:00:00', '08:00:00', '22:00:00'),
(34, '2015-11-03 08:00:00', '2015-11-06 22:00:00', '08:00:00', '22:00:00'),
(35, '2015-11-03 08:00:00', '2015-11-07 22:00:00', '08:00:00', '22:00:00'),
(36, '2015-11-03 08:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(37, '2015-11-03 08:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(38, '2015-11-03 08:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(39, '2015-11-03 08:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(40, '2015-11-03 08:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(41, '2015-11-03 12:00:00', '2015-11-03 18:00:00', '08:00:00', '22:00:00'),
(42, '2015-11-03 12:00:00', '2015-11-04 18:00:00', '08:00:00', '22:00:00'),
(43, '2015-11-03 12:00:00', '2015-11-05 18:00:00', '08:00:00', '22:00:00'),
(44, '2015-11-03 12:00:00', '2015-11-06 18:00:00', '08:00:00', '22:00:00'),
(45, '2015-11-03 12:00:00', '2015-11-07 18:00:00', '08:00:00', '22:00:00'),
(46, '2015-11-03 12:00:00', '2015-11-03 22:00:00', '08:00:00', '22:00:00'),
(47, '2015-11-03 12:00:00', '2015-11-04 22:00:00', '08:00:00', '22:00:00'),
(48, '2015-11-03 12:00:00', '2015-11-05 22:00:00', '08:00:00', '22:00:00'),
(49, '2015-11-03 12:00:00', '2015-11-06 22:00:00', '08:00:00', '22:00:00'),
(50, '2015-11-03 12:00:00', '2015-11-07 22:00:00', '08:00:00', '22:00:00'),
(51, '2015-11-03 12:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(52, '2015-11-03 12:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(53, '2015-11-03 12:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(54, '2015-11-03 12:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(55, '2015-11-03 12:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(56, '2015-11-03 22:00:00', '2015-11-03 23:00:00', '08:00:00', '22:00:00'),
(57, '2015-11-03 22:00:00', '2015-11-04 23:00:00', '08:00:00', '22:00:00'),
(58, '2015-11-03 22:00:00', '2015-11-05 23:00:00', '08:00:00', '22:00:00'),
(59, '2015-11-03 22:00:00', '2015-11-06 23:00:00', '08:00:00', '22:00:00'),
(60, '2015-11-03 22:00:00', '2015-11-07 23:00:00', '08:00:00', '22:00:00'),
(61, '2015-11-03 22:30:00', '2015-11-03 23:30:00', '08:00:00', '22:00:00'),
(62, '2015-11-03 22:30:00', '2015-11-04 23:30:00', '08:00:00', '22:00:00'),
(63, '2015-11-03 22:30:00', '2015-11-05 23:30:00', '08:00:00', '22:00:00'),
(64, '2015-11-03 22:30:00', '2015-11-06 23:30:00', '08:00:00', '22:00:00'),
(65, '2015-11-03 22:30:00', '2015-11-07 23:30:00', '08:00:00', '22:00:00');
-- select query to get work time difference
SELECT
start_datetime,
end_datetime,
start_work_time,
end_work_time,
get_working_time(start_datetime, end_datetime, start_work_time, end_work_time) AS diff_in_seconds
FROM
test_working_time;
This will give the difference, in seconds, counting only the working hours between the start and end datetimes.
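Or, calling the function directly with the values of test row 2 (a sketch): Nov 3rd contributes its full 08:00-22:00 window, while the hours of Nov 4th before 08:00 contribute nothing.
SELECT get_working_time('2015-11-03 01:00:00', '2015-11-04 07:00:00',
                        '08:00:00', '22:00:00') AS seconds;  -- 50400 (14 hours)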