Moving Average based on Timestamps in PostgreSQL - sql

I want to compute a moving average based on timestamps.
I have two columns, Temperature and timestamps (date-time), and I want to compute the moving average over successive temperature observations within every 15-minute interval. In other words, the data for each average should be selected based on a 15-minute time window. Note that the number of observations can differ between windows: all windows are the same size (15 minutes), but each window may contain a different number of observations.
For example:
The first window might require averaging n observations, while the second window averages n+5 observations.
Data Sample:
ID Timestamps Temperature
1 2007-09-14 22:56:12 5.39
2 2007-09-14 22:58:12 5.34
3 2007-09-14 23:00:12 5.16
4 2007-09-14 23:02:12 5.54
5 2007-09-14 23:04:12 5.30
6 2007-09-14 23:06:12 5.20
7 2007-09-14 23:10:12 5.39
8 2007-09-14 23:12:12 5.34
9 2007-09-14 23:20:12 5.16
10 2007-09-14 23:24:12 5.54
11 2007-09-14 23:30:12 5.30
12 2007-09-14 23:33:12 5.20
13 2007-09-14 23:40:12 5.39
14 2007-09-14 23:42:12 5.34
15 2007-09-14 23:44:12 5.16
16 2007-09-14 23:50:12 5.54
17 2007-09-14 23:52:12 5.30
18 2007-09-14 23:57:12 5.20
Main challenge:
How can I make the query distinguish each 15-minute window when the observations are not spaced at exact intervals, because the sampling frequency varies?
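For reference, a minimal setup matching the sample data, in case you want to test the answers below (the table name l matches the first answer; other answers use their own names such as readings or observation, so adjust accordingly):
-- sample data from the question; only the first rows are shown, the rest follow the same pattern
create table l (
    id          int primary key,
    Timestamps  timestamp not null,
    Temperature numeric(5,2) not null
);
insert into l (id, Timestamps, Temperature) values
    (1, '2007-09-14 22:56:12', 5.39),
    (2, '2007-09-14 22:58:12', 5.34),
    (3, '2007-09-14 23:00:12', 5.16);
-- ...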

You can join your table with itself:
select l1.id, avg(l2.Temperature)
from l l1
inner join l l2
       on l2.id <= l1.id
      and l2.Timestamps + interval '15 minutes' > l1.Timestamps
group by l1.id
order by id;
Results:
| ID | AVG |
-----------------------
| 1 | 5.39 |
| 2 | 5.365 |
| 3 | 5.296666666667 |
| 4 | 5.3575 |
| 5 | 5.346 |
| 6 | 5.321666666667 |
| 7 | 5.331428571429 |
Note: this only does the 'hard work'. You will still need to join the result back to the original table, or add whatever extra columns your final query needs; I don't know exactly what that query should look like. Adapt this solution or ask for more help.

Assuming you want to restart the rolling average after each 15 minute interval:
select id,
       temp,
       avg(temp) over (partition by group_nr order by time_read) as rolling_avg
from (
    select id,
           temp,
           time_read,
           interval_group,
           id - row_number() over (partition by interval_group order by time_read) as group_nr
    from (
        select id,
               time_read,
               'epoch'::timestamp + '900 seconds'::interval * (extract(epoch from time_read)::int4 / 900) as interval_group,
               temp
        from readings
    ) t1
) t2
order by time_read;
It is based on Depesz's solution for grouping data by "time ranges".
Here is an SQLFiddle example: http://sqlfiddle.com/#!1/0f3f0/2
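On PostgreSQL 14 or later, the epoch arithmetic used for interval_group can be written more directly with date_bin; a minimal sketch, assuming the same readings(id, time_read, temp) table as above:
-- date_bin() snaps each timestamp onto a fixed 15-minute grid (PostgreSQL 14+)
select id,
       temp,
       time_read,
       date_bin(interval '15 minutes', time_read, 'epoch'::timestamp) as interval_group
from readings
order by time_read;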

Here's an approach that utilises the facility to use an aggregation function as a window function. The aggregate function keeps the last 15 minutes' worth of observations in an array, along with the current running total. The state transition function shifts elements off the array that have fallen behind the 15-minute window, and pushes on the latest observation. The final function simply computes the mean temperature in the array.
Now, as to whether this is a benefit or not... it depends. It leans on the plpgsql-execution side of PostgreSQL rather than the database-access side, and my own experience is that plpgsql is not fast. If you can easily do lookups back to the table to find the previous 15 minutes' rows for each observation, a self-join (as in @danihp's answer) will do well. However, this approach can deal with the observations coming from some more complex source, where those lookups aren't practical. As ever, trial and compare on your own system.
-- based on using this table definition
create table observation(id int primary key, timestamps timestamp not null unique,
                          temperature numeric(5,2) not null);
-- note that I'm reusing the table structure as a type for the state here
create type rollavg_state as (memory observation[], total numeric(5,2));
create function rollavg_func(state rollavg_state, next_in observation) returns rollavg_state immutable language plpgsql as $$
declare
  cutoff timestamp;
  i int;
begin
  raise debug 'rollavg_func: state=%, next_in=%', state, next_in;
  cutoff := next_in.timestamps - '15 minutes'::interval;
  i := array_lower(state.memory, 1);
  raise debug 'cutoff is %', cutoff;
  -- shift off observations that have fallen behind the 15-minute window,
  -- subtracting each one from the running total before advancing past it
  while i <= array_upper(state.memory, 1) and state.memory[i].timestamps < cutoff loop
    raise debug 'shifting %', state.memory[i].timestamps;
    state.total := state.total - state.memory[i].temperature;
    i := i + 1;
  end loop;
  state.memory := array_append(state.memory[i:array_upper(state.memory, 1)], next_in);
  state.total := coalesce(state.total, 0) + next_in.temperature;
  return state;
end
$$;
create function rollavg_output(state rollavg_state) returns float8 immutable language plpgsql as $$
begin
  raise debug 'rollavg_output: state=% len=%', state, array_length(state.memory, 1);
  if array_length(state.memory, 1) > 0 then
    return state.total / array_length(state.memory, 1);
  else
    return null;
  end if;
end
$$;
create aggregate rollavg(observation) (sfunc = rollavg_func, finalfunc = rollavg_output, stype = rollavg_state);
-- referring to just a table name means a tuple value of the row as a whole, whose type is the table type
-- the aggregate relies on inputs arriving in ascending timestamp order
select rollavg(observation) over (order by timestamps) from observation;

Based on dani herrera's answer:
select l1.id,
l1.time_read,
l1.temp ,
avg( l2.Temp ) as rolling_avg
from readings l1
inner join readings l2
on l2.id <= l1.id and
l2.time_read + interval '15 minutes' > l1.time_read
group by l1.id
order by time_read;
Here is an SQLFiddle: http://sqlfiddle.com/#!17/9db74/161 (chart of the resulting data omitted).

Related

SQL/Power BI How to expand table according dates

I have a table like below, where a new record is created when there is a change in the status of a task.
task  status  last update
A     1       28/04/2022
A     3       01/05/2022
A     5       05/05/2022
B     1       28/04/2022
B     3       03/05/2022
B     4       05/05/2022
The problem is that I need to plot a graph within a time range, where I know the status of each item regardless of the date it was changed/created. With that, I think the easiest is to transform to the table below:
task  status  last update
A     1       28/04/2022
A     1       29/04/2022
A     1       30/04/2022
A     3       01/05/2022
A     3       02/05/2022
A     3       03/05/2022
A     3       04/05/2022
A     5       05/05/2022
B     1       28/04/2022
B     1       29/04/2022
B     1       30/04/2022
B     1       01/05/2022
B     1       02/05/2022
B     3       03/05/2022
B     3       04/05/2022
B     4       05/05/2022
However, I can't think of a way to do it, either directly in Power BI or in SQL (I'm connecting to a Redshift database through a SQL query).
Could you please help me?
Thanks
You can create this kind of visual using the standard line chart visualization. In the visualization settings, go to the "Shapes" menu and turn the "Stepped" view on.
While not necessary, it may be best practice to create a date dimension table with daily values spanning from the minimum update date to the maximum update date.
Dates = CALENDAR(MIN(Tasks[last update]),MAX(Tasks[last update]))
You can then create a one to many relationship between Dates and Tasks.
demo
very similar question: How to do forward fill as a PL/PGSQL function
I don't know the actual differences between Amazon Redshift and PostgreSQL.
The demo is based on PostgreSQL 14, so it may not work on Redshift.
Basic idea: for every distinct task, get the min and max last_update dates, then use the generate_series function to expand the dates for that task between its min and max last_update. The key point is first_value(status): once you expand the dates, the status value is obviously null on some of them, so a window partition is used to fill the gaps. If you want to dig deeper, you can read the manual: https://www.postgresql.org/docs/14/plpgsql.html
CREATE OR REPLACE FUNCTION test_expand ()
    RETURNS TABLE (
        _date1 date,
        _first_ctask text,
        _first_cstatus bigint
    )
    AS $$
DECLARE
    distinct_task record;
    max_last_update date;
    min_last_update date;
    _sql text;
BEGIN
    FOR distinct_task IN
        SELECT DISTINCT task FROM test_1 ORDER BY 1
    LOOP
        min_last_update := (
            SELECT min(last_update) FROM test_1
            WHERE task = distinct_task.task);
        max_last_update := (
            SELECT max(last_update) FROM test_1
            WHERE task = distinct_task.task);
        _sql := format($dml$
            WITH cte AS (
                SELECT date1::date,
                       $task$%s$task$ AS _task,
                       status,
                       count(status) OVER (ORDER BY date1) AS c_s
                FROM (SELECT generate_series($a$%s$a$::date, $b$%s$b$::date, interval '1 day')) g (date1)
                LEFT JOIN test_1 ON date1 = last_update
            )
            SELECT date1, _task, first_value(status) OVER (PARTITION BY c_s ORDER BY date1, status)
            FROM cte
        $dml$, distinct_task.task, min_last_update, max_last_update);
        RETURN QUERY EXECUTE _sql;
    END LOOP;
    RETURN;
END;
$$
LANGUAGE plpgsql;
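If you can run this on plain PostgreSQL rather than Redshift, the same forward fill can also be done without a procedural loop; a minimal set-based sketch, assuming the demo's test_1(task, status, last_update) table:
-- expand each task to one row per day between its min and max last_update,
-- then carry the most recent status forward with a correlated subquery
select t.task,
       d::date as last_update,
       (select s.status
          from test_1 s
         where s.task = t.task
           and s.last_update <= d::date
         order by s.last_update desc
         limit 1) as status
from (select task, min(last_update) as first_day, max(last_update) as last_day
        from test_1
       group by task) t
cross join lateral generate_series(t.first_day, t.last_day, interval '1 day') as d
order by t.task, d;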

Group data series into variable width windows based on first event

I have a computational task which can be reduced to the following problem:
I have a large set of pairs of integers (key, val) which I want to group into windows. The first window starts with the first pair p ordered by the key attribute and spans all the pairs where p[i].key belongs to [p[0].key; p[0].key + N), for some arbitrary integer N, positive and common to all windows.
The next window starts with the first pair (ordered by key) not included in the previous windows and again spans all the pairs from its key to key + N, and so on for the following windows.
The last step is to sum the second attribute for each window and display it together with the first key of the window.
For example, given a list of records with these values:
key  val
1    3
2    7
5    1
6    4
7    1
10   3
13   5
and N=3, the windows would be:
{(1,3),(2,7)},
{(5,1),(6,4),(7,1)},
{(10,3)}
{(13,5)}
The final result:
key  sum_of_values
1    10
5    6
10   3
13   5
This is easy to program with a standard programming language but I have no clue how to solve this with SQL.
Note: If ClickHouse doesn't support the RECURSIVE keyword, just remove that keyword from the expression.
ClickHouse seems to use non-standard syntax for the WITH clause. The query below uses standard SQL, so adjust as needed.
Sorry, ClickHouse may not support this approach at all. If not, we would need to find another method of walking through the data.
Standard SQL:
There are a few ways. Here's one approach. First assign row numbers to allow recursively stepping through the rows. We could use LEAD as well.
Assign a group (key value) to each row based on the current key and the last group/key value and whether they are within some distance (N = 3, in this case).
The last step is to just SUM these values per group start_key and to use the start_key value as the starting key in each group.
WITH RECURSIVE nrows (xkey, val, n) AS (
SELECT xkey, val, ROW_NUMBER() OVER (ORDER BY xkey) FROM test
)
, cte (xkey, val, n, start_key) AS (
SELECT xkey, val, n, xkey FROM nrows WHERE n = 1
UNION ALL
SELECT t1.xkey, t1.val, t1.n
, CASE WHEN t1.xkey <= t2.start_key + (3-1) THEN t2.start_key ELSE t1.xkey END
FROM nrows AS t1
JOIN cte AS t2
ON t2.n = t1.n-1
)
SELECT start_key
, SUM(val) AS sum_values
FROM cte
GROUP BY start_key
ORDER BY start_key
;
Result:
+-----------+------------+
| start_key | sum_values |
+-----------+------------+
| 1 | 10 |
| 5 | 6 |
| 10 | 3 |
| 13 | 5 |
+-----------+------------+
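For completeness, a minimal sketch of the test table the query above assumes (the column is named xkey rather than key to avoid the reserved word; the data is taken from the question):
create table test (xkey int, val int);
insert into test (xkey, val) values
    (1, 3), (2, 7), (5, 1), (6, 4), (7, 1), (10, 3), (13, 5);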

BigQuery full table to partition

I have 340 GB of data in one table (270 days' worth of data), and I am now planning to move this data into a partitioned table.
That means I will have 270 partitions. What is the best way to move this data into the partitioned table?
I don't want to run 270 queries, which would be a very costly operation, so I am looking for an optimized solution.
I have multiple tables like this and need to migrate all of them to partitioned tables.
Thanks,
I see three options:
Direct Extraction out of original table:
  Actions (how many queries to run) = Days [to extract] = 270
  Full Scans (how much data scanned, measured in full scans of the original table) = Days = 270
  Cost, $ = $5 x Table Size, TB x Full Scans = $5 x 0.34 x 270 = $459.00
Hierarchical (recursive) Extraction: (described in Mosha's answer)
  Actions = 2^log2(Days) - 2 = 510
  Full Scans = 2*log2(Days) = 18
  Cost, $ = $5 x Table Size, TB x Full Scans = $5 x 0.34 x 18 = $30.60
Clustered Extraction: (I will describe it in a sec)
  Actions = Days + 1 = 271
  Full Scans = [always] 2
  Cost, $ = $5 x Table Size, TB x Full Scans = $5 x 0.34 x 2 = $3.40
Summary

Method                               Actions  Total Full Scans  Total Cost
Direct Extraction                    270      270               $459.00
Hierarchical (recursive) Extraction  510      18                $30.60
Clustered Extraction                 271      2                 $3.40
Definitely, for most practical purposes Mosha's solution is the way to go (I use it in most such cases).
It is relatively simple and straightforward.
Even though you need to run the query 510 times, the query is "relatively" simple and the orchestration logic is easy to implement with whatever client you usually use.
And the cost saving is quite visible!
From $460 down to $31!
Almost 15 times lower!
In case you -
a) want to lower the cost even further, by another factor of 9 (so about 135x lower in total)
b) and like having fun and more challenges
- take a look at the third option
“Clustered Extraction” Explanation
Idea / Goal:
Step 1
We want to transform the original table into another [single] table with 270 columns, one column per day.
Each column will hold one serialized row for the respective day from the original table.
The total number of rows in this new table will be equal to the number of rows in the "heaviest" day.
This will require just one query (see the example below) with one full scan.
Step 2
After this new table is ready, we will extract the data day by day, querying ONLY the respective column and writing into the final daily table (the schema of each daily table is the very same as the original table's schema, and all those tables can be pre-created).
This will require 270 queries to be run, with scans approximately equivalent (this really depends on how complex your schema is, so it can vary) to one full scan of the original table.
While querying a column, we will need to de-serialize the row's value and parse it back into the original schema.
Very simplified example: (using BigQuery Standard SQL here)
The purpose of this example is just to give direction, if you find the idea interesting.
Serialization / de-serialization is extremely simplified here, to keep the focus on the idea rather than on a particular implementation, which can differ from case to case (mostly depending on the schema).
So, assume original table (theTable) looks somehow like below
SELECT 1 AS id, "101" AS x, 1 AS ts UNION ALL
SELECT 2 AS id, "102" AS x, 1 AS ts UNION ALL
SELECT 3 AS id, "103" AS x, 1 AS ts UNION ALL
SELECT 4 AS id, "104" AS x, 1 AS ts UNION ALL
SELECT 5 AS id, "105" AS x, 1 AS ts UNION ALL
SELECT 6 AS id, "106" AS x, 2 AS ts UNION ALL
SELECT 7 AS id, "107" AS x, 2 AS ts UNION ALL
SELECT 8 AS id, "108" AS x, 2 AS ts UNION ALL
SELECT 9 AS id, "109" AS x, 2 AS ts UNION ALL
SELECT 10 AS id, "110" AS x, 3 AS ts UNION ALL
SELECT 11 AS id, "111" AS x, 3 AS ts UNION ALL
SELECT 12 AS id, "112" AS x, 3 AS ts UNION ALL
SELECT 13 AS id, "113" AS x, 3 AS ts UNION ALL
SELECT 14 AS id, "114" AS x, 3 AS ts UNION ALL
SELECT 15 AS id, "115" AS x, 3 AS ts UNION ALL
SELECT 16 AS id, "116" AS x, 3 AS ts UNION ALL
SELECT 17 AS id, "117" AS x, 3 AS ts UNION ALL
SELECT 18 AS id, "118" AS x, 3 AS ts UNION ALL
SELECT 19 AS id, "119" AS x, 4 AS ts UNION ALL
SELECT 20 AS id, "120" AS x, 4 AS ts
Step 1 – transform table and write result into tempTable
SELECT
num,
MAX(IF(ts=1, ser, NULL)) AS ts_1,
MAX(IF(ts=2, ser, NULL)) AS ts_2,
MAX(IF(ts=3, ser, NULL)) AS ts_3,
MAX(IF(ts=4, ser, NULL)) AS ts_4
FROM (
SELECT
ts,
CONCAT(CAST(id AS STRING), "|", x, "|", CAST(ts AS STRING)) AS ser,
ROW_NUMBER() OVER(PARTITION BY ts ORDER BY id) num
FROM theTable
)
GROUP BY num
tempTable will look like below:
num ts_1 ts_2 ts_3 ts_4
1 1|101|1 6|106|2 10|110|3 19|119|4
2 2|102|1 7|107|2 11|111|3 20|120|4
3 3|103|1 8|108|2 12|112|3 null
4 4|104|1 9|109|2 13|113|3 null
5 5|105|1 null 14|114|3 null
6 null null 15|115|3 null
7 null null 16|116|3 null
8 null null 17|117|3 null
9 null null 18|118|3 null
Here, I am using simple concatenation for serialization
Step 2 – extracting rows for specific day and write output to respective daily table
Please note: in the example below we are extracting rows for ts = 2; this corresponds to column ts_2.
SELECT
r[OFFSET(0)] AS id,
r[OFFSET(1)] AS x,
r[OFFSET(2)] AS ts
FROM (
SELECT SPLIT(ts_2, "|") AS r
FROM tempTable
WHERE NOT ts_2 IS NULL
)
The result will look like below (which is expected):
id x ts
6 106 2
7 107 2
8 108 2
9 109 2
I wish I had more time to write this down, so don't judge too harshly if something is missing; this is more of a directional answer. At the same time the example is pretty reasonable, and if you have a plain, simple schema, almost no extra thinking is required. Of course, with records and nested stuff in the schema, the most challenging part is the serialization / de-serialization, but that's where the fun is, along with the extra $ saving.
I will add a fourth option to @Mikhail's answer
DML QUERY
Action = 1 query to run
Full scans = 1
Cost = $5 x 0.34 = $1.70 (270 times cheaper than solution #1 \o/)
With the new DML feature of BigQuery you can convert a non-partitioned table to a partitioned one while doing only one full scan of the source table
To illustrate my solution I will use one of BQ's public tables, namely bigquery-public-data:hacker_news.comments. Below is the table's schema:
name    | type      | description
--------|-----------|----------------------------------------------------------
id      | INTEGER   | ...
by      | STRING    | ...
author  | STRING    | ...
...     |           |
time_ts | TIMESTAMP | human readable timestamp in UTC YYYY-MM-DD hh:mm:ss /!\ /!\ /!\
...     |           |
We are going to partition the comments table based on time_ts
#standardSQL
CREATE TABLE my_dataset.comments_partitioned
PARTITION BY DATE(time_ts)
AS
SELECT *
FROM `bigquery-public-data.hacker_news.comments`
I hope it helps :)
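Once comments_partitioned exists, queries that filter on the partitioning column only scan the matching partitions. A quick sanity check as a sketch (the date range here is purely illustrative):
#standardSQL
SELECT COUNT(*)
FROM my_dataset.comments_partitioned
WHERE time_ts >= TIMESTAMP '2015-01-01'
  AND time_ts <  TIMESTAMP '2015-01-02'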
If your data was in sharded tables (i.e. with YYYYmmdd suffix), you could've used "bq partition" command. But with data in a single table - you will have to scan it multiple times applying different WHERE clauses on your partition key column.
The only optimization I can think of is to do it hierarchically, i.e. instead of 270 queries which would do 270 full table scans, first split the table in half, then each half in half, etc. This way you will need to pay for 2*log_2(270) = 2*9 = 18 full scans.
Once the conversion is done - all the temporary tables can be deleted to eliminate extra storage costs.
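A minimal sketch of one hierarchical split step in Standard SQL (the dataset, table, and column names here are illustrative, not your actual schema):
-- copy roughly half of the days into each of two intermediate tables,
-- then repeat the same split on each half until you reach single-day tables
CREATE TABLE dataset.first_half AS
SELECT * FROM dataset.original
WHERE partition_date < DATE '2017-05-15';

CREATE TABLE dataset.second_half AS
SELECT * FROM dataset.original
WHERE partition_date >= DATE '2017-05-15';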

How to consolidate blocks of time?

I have a derived table with a list of relative seconds to a foreign key (ID):
CREATE TABLE Times (
ID INT
, TimeFrom INT
, TimeTo INT
);
The table contains mostly non-overlapping data, but there are occasions where one record's TimeFrom is less than another record's TimeTo, i.e. the ranges overlap:
+----+----------+--------+
| ID | TimeFrom | TimeTo |
+----+----------+--------+
| 10 | 10 | 30 |
| 10 | 50 | 70 |
| 10 | 60 | 150 |
| 10 | 75 | 150 |
| .. | ... | ... |
+----+----------+--------+
The result set is meant to be a flattened linear idle report, but with too many of these overlaps I end up with negative time in use. That is, if the window above for ID = 10 was 150 seconds long and I summed the durations of the rows to subtract from the window size, I'd wind up with 150-(20+20+90+75)=-55. I have tried this approach, and it is what led me to realize there were overlaps that needed to be flattened.
So, what I'm looking for is a solution to flatten the overlaps into one set of times:
+----+----------+--------+
| ID | TimeFrom | TimeTo |
+----+----------+--------+
| 10 | 10 | 30 |
| 10 | 50 | 150 |
| .. | ... | ... |
+----+----------+--------+
Considerations: Performance is very important here, as this is part of a larger query that performs well on its own, and I'd rather not impact its performance much if I can help it.
On a comment regarding "Which seconds have an interval", this is something I have tried for the end result, and am looking for something with better performance. Adapted to my example:
SELECT SUM(C.N)
FROM (
SELECT A.N, ROW_NUMBER()OVER(ORDER BY A.N) RowID
FROM
(SELECT TOP 60 1 N FROM master..spt_values) A
, (SELECT TOP 720 1 N FROM master..spt_values) B
) C
WHERE EXISTS (
SELECT 1
FROM Times SE
WHERE SE.ID = 10
AND SE.TimeFrom <= C.RowID
AND SE.TimeTo >= C.RowID
AND EXISTS (
SELECT 1
FROM Times2 D
WHERE ID = SE.ID
AND D.TimeFrom <= C.RowID
AND D.TimeTo >= C.RowID
)
GROUP BY SE.ID
)
The problem I have with this solution is that I get a Row Count Spool out of the EXISTS query in the query plan, with a number of executions equal to COUNT(C.*). I left the real numbers in that query to illustrate why getting around this approach is for the best: even though the Row Count Spool reduces the cost of the query by quite a bit, its execution count increases the cost of the query as a whole by quite a bit as well.
Further Edit: The end goal is to put this in a procedure, so Table Variables and Temp Tables are also a possible tool to use.
OK. I'm still trying to do this with just one SELECT. But this totally works:
DECLARE @tmp TABLE (ID INT, GroupId INT, TimeFrom INT, TimeTo INT)
INSERT INTO @tmp
SELECT ID, 0, TimeFrom, TimeTo
FROM Times
ORDER BY Id, TimeFrom
DECLARE @timeTo int, @id int, @groupId int
SET @groupId = 0
UPDATE @tmp
SET @groupId = CASE WHEN id != @id THEN 0
                    WHEN TimeFrom > @timeTo THEN @groupId + 1
                    ELSE @groupId END,
    GroupId = @groupId,
    @timeTo = TimeTo,
    @id = id
SELECT Id, MIN(TimeFrom), MAX(TimeTo) FROM @tmp
GROUP BY ID, GroupId ORDER BY ID
Left join each row to its successor overlapping row on the same ID value (where such a row exists).
Now for each row in the result-set of LHS left join RHS the contribution to the elapsed time for the ID is:
isnull(RHS.TimeFrom,LHS.TimeTo) - LHS.TimeFrom as TimeElapsed
Summing these by ID should give you the correct answer.
Note that:
- where there isn't an overlapping successor row the calculation is simply
LHS.TimeTo - LHS.TimeFrom
- where there is an overlapping successor row the calculation will net to
(RHS.TimeFrom - LHS.TimeFrom) + (RHS.TimeTo - RHS.TimeFrom)
which simplifies to
RHS.TimeTo - LHS.TimeFrom
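A minimal T-SQL sketch of this idea, assuming the Times table from the question (OUTER APPLY stands in for the left join to the overlapping successor row; the alias nxt is illustrative):
-- for each row, find the first later row of the same ID that starts before this row ends;
-- without one, the row contributes its full length, otherwise only up to the successor's start
SELECT t.ID,
       SUM(ISNULL(nxt.TimeFrom, t.TimeTo) - t.TimeFrom) AS TotalElapsed
FROM Times AS t
OUTER APPLY (
    SELECT TOP (1) s.TimeFrom
    FROM Times AS s
    WHERE s.ID = t.ID
      AND s.TimeFrom > t.TimeFrom
      AND s.TimeFrom < t.TimeTo      -- overlapping successor
    ORDER BY s.TimeFrom
) AS nxt
GROUP BY t.ID;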
What about something like below (assumes SQL 2008+ due to CTE):
WITH Overlaps
AS
(
SELECT t1.Id,
TimeFrom = MIN(t1.TimeFrom),
TimeTo = MAX(t2.TimeTo)
FROM dbo.Times t1
INNER JOIN dbo.Times t2 ON t2.Id = t1.Id
AND t2.TimeFrom > t1.TimeFrom
AND t2.TimeFrom < t1.TimeTo
GROUP BY t1.Id
)
SELECT o.Id,
o.TimeFrom,
o.TimeTo
FROM Overlaps o
UNION ALL
SELECT t.Id,
t.TimeFrom,
t.TimeTo
FROM dbo.Times t
INNER JOIN Overlaps o ON o.Id = t.Id
AND (o.TimeFrom > t.TimeFrom OR o.TimeTo < t.TimeTo);
I do not have a lot of data to test with but seems decent on the smaller data sets I have.
I also wrapped my head around this issue, and in the end I found that the problem is your data.
You claim (if I get that right) that these entries should reflect the relative times when a user goes idle / comes back.
So you should consider sanitizing your data and refactoring your inserts to produce valid data sets.
For instance, the two lines:
+----+----------+--------+
| ID | TimeFrom | TimeTo |
+----+----------+--------+
| 10 | 50 | 70 |
| 10 | 60 | 150 |
How can it be possible that a user is idle until second 70, but goes idle again at second 60? This already implies that he was back at around second 59 at the latest.
I can only assume that this issue comes from different threads and/or browser windows (tabs) the user might be using your application with (each having its own "idle detection").
So instead of working around the symptoms, you should fix the cause! Why is this data entry inserted into the table at all? You could avoid it by simply checking whether the user is already idle before inserting a new row.
Create a unique key constraint on ID and TimeTo
Whenever an idle-event is detected, execute the following query:
INSERT IGNORE INTO Times (ID,TimeFrom,TimeTo)VALUES('10', currentTimeStamp, -1);
-- (If the user is already "idle" - nothing will happen)
Whenever an comeback-event is detected, execute the following query:
UPDATE Times SET TimeTo=currentTimeStamp WHERE ID='10' and TimeTo=-1
-- (If the user is already "back" - nothing will happen)
The fiddle linked here: http://sqlfiddle.com/#!2/dcb17/1 would reproduce the chain of events for your example, but resulting in a clean and logical set of idle-windows:
ID TIMEFROM TIMETO
10 10 30
10 50 70
10 75 150
Note: The output is slightly different from the output you desired, but I feel that this is more accurate, for the reason outlined above: a user cannot go idle at second 70 without first returning from his current idle state. He either STAYS idle (and a second thread/tab runs into the idle event) or he returned in between.
Especially given your need to maximize performance, you should fix the data and not invent a work-around query. This is maybe 3 ms upon insert, but could be worth 20 seconds upon select!
Edit: if multi-threading / multiple sessions are the cause of the wrong inserts, you would also need to check that most_recent_come_back_time < now() - idleTimeout; otherwise a user might come back on tab1 and be recorded idle on tab2 a few seconds later, because tab2 ran into its idle timeout while the user only refreshed tab1.
I had the 'same' problem once with 'days' (additionally without counting weekends and holidays).
The word counting gave me the following idea:
create table Seconds ( sec INT);
insert into Seconds values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9), ...
select count(distinct sec) from times t, seconds s
where s.sec between t.timefrom and t.timeto-1
and id=10;
You can shift the start down to 0 (I put the '10' here in parentheses):
select count(distinct sec) from times t, seconds s
where s.sec between t.timefrom- (10) and t.timeto- (10)-1
and id=10;
and finally
select count(distinct sec) from times t, seconds s,
(select min(timefrom) m from times where id=10) as m
where s.sec between t.timefrom-m.m and t.timeto-m.m-1
and id=10;
additionaly you can "ignore" eg. 10 seconds by dividing you loose some prezition but earn speed
select count(distinct sec)*d from times t, seconds s,
(select min(timefrom) m from times where id=10) as m,
(select 10 d) as d
where s.sec between (t.timefrom-m)/d and (t.timeto-m)/d-1
and id=10;
Sure, it depends on the range you have to look at, but a 'day' or two of seconds should work (although I did not test it).
fiddle ...
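If you are on SQL Server (as the master..spt_values in your own attempt suggests), the Seconds helper table can be filled without typing the values out; a sketch, assuming one day's worth of seconds is enough:
-- generate 0..86399 from a cross join of spt_values (plenty of rows) and number them
INSERT INTO Seconds (sec)
SELECT TOP (86400) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
FROM master..spt_values a
CROSS JOIN master..spt_values b;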

How to select based on maximum time interval

I have a table as follows in PostgreSQL 8.4:
1 | John Smith | 2011-08-12 12:44:13.125+08
2 | John Smith | 2011-08-16 08:38:57.968+08
3 | John Smith | 2011-08-16 08:38:58.062+08
4 | Kenny Long | 2011-08-16 17:06:35.843+08
5 | Kenny Long | 2011-08-16 17:06:35.906+08
6 | Kenny Long | 2011-08-16 17:06:59.281+08
7 | Kenny Long | 2011-08-16 17:07:00.234+08
8 | Kenny Long | 2011-08-16 17:07:32.859+08
9 | Kenny Long | 2011-08-16 17:08:00.437+08
10 | Kenny Long | 2011-08-16 17:08:22.718+08
11 | Kenny Long | 2011-08-16 17:08:22.781+08
I would like to select rows based on the timestamp. Only one row is needed for records that fall within 2 minutes of each other. For example, records 4 to 9 should return only row number 4 and ignore the rest of the rows.
How can I achieve this? Your help is greatly appreciated.
Thank you in advance.
Joe Liew
I've tried a recursive approach. I'm not sure it's the best way, and I'm quite sure I should study some window operations to simplify it.
But it worked on my test case. The goal is to start with the minimum timestamp per person, then track which rows are to be deleted (within the 2-minute range) and which row is the next valid one. Then at each iteration we continue from this valid row (one per person).
So here is the query for table myschema.mytable with columns id, name, tm. Note that the level column is there just to track recursion and debugging; it is not necessary:
WITH RECURSIVE mytmp(id,name,thetime,thelevel) AS (
-- recursive query: 1st row
-- starting point, one row of the table for each people
-- with a subquery to get the min time with id, maybe a better way to do it
(
select myschema.mytable.id,myschema.mytable.name,myschema.mytable.tm as thetime,1 as thelevel
from (
select name,min(tm) as mintm
from myschema.mytable
group by name
) q,myschema.mytable
WHERE myschema.mytable.name=q.name
AND myschema.mytable.tm=q.mintm
ORDER BY name ASC) -- end of starting point of recursive query
UNION ALL
-- now the recursive part, starting from the 1st row and then again and again (loop)
-- get descendants in the 2 minutes interval for every computed row already in mytmp
--
-- get from previous iterations targets, one per guy
-- and track the 1st new valid row (>2min) for that guy
-- removing bad rows (<2min) is easy, several way to do it
-- keeping only one valid row (and not all the others is harder, limit and aggregates functions are restricted in recursive terms
-- we must keep only one, as the future valid rows will depend on the 2 minutes range from this one
-- maybe some window function could help me, but at least I've a working solution
select myschema.mytable.id,myschema.mytable.name,myschema.mytable.tm as thetime,q2.thelevel
FROM myschema.mytable,(
-- here need to keep 1st true one
select myschema.mytable.name,MIN(myschema.mytable.tm) as tm,mytmp2.thelevel +1 as thelevel
FROM myschema.mytable,(
select id,name,thetime,thelevel
from mytmp
) mytmp2
-- hack: mytmp2 is useless, mytmp should have been used
-- we create this indirection to avoid this message:
-- "ERROR: aggregate functions not allowed in a recursive query's recursive term"
-- on the MIN functions
-- I do not know why it worked :-)
WHERE myschema.mytable.name=mytmp2.name
-- future
AND myschema.mytable.tm - mytmp2.thetime > INTERVAL '0'
GROUP BY
-- hack the group by, to make 2 groups
-- the first one for rows in the 2 min range and the second one for others
CASE WHEN ((myschema.mytable.tm - mytmp2.thetime) > INTERVAL '2 minutes') THEN 1 ELSE 2 END,
myschema.mytable.name,mytmp2.thelevel,mytmp2.thetime
-- then with the having we keep only the second group, containing the first valid > 2min row
HAVING ((MIN(myschema.mytable.tm) - mytmp2.thetime) > INTERVAL '2 minutes')=true
) q2 -- q2 contains the 1st true row and all false rows for each person
-- q2 is used to get the id, that we cannot have in a group by request
WHERE q2.tm=myschema.mytable.tm
AND q2.name=myschema.mytable.name
) -- end of recursive query
SELECT *
FROM mytmp
ORDER BY name asc, thelevel asc,thetime asc
-- LIMIT 100 -- to debug, avoid infinite loops
Another solution would be to use a stored procedure doing the same thing in a temporary table (take a valid row, delete the rows in the 2-minute range, then take the next valid one, etc.); it might be easier to maintain.
Just an idea, not tested. Window functions need 8.4 or later.
SELECT * FROM
(SELECT
    name,
    tm,
    case
      when lagname is NULL OR                  -- first row of everything
           (name <> lagname) OR                -- we have order by name, this is the first row of this name
           (name = lagname AND lagtm + interval '2 minutes' <= tm)  -- previous row for this name is at least 2 minutes earlier
      then 1
      else 0
    end as flags
 FROM
 (
    SELECT name,
           tm,
           lag(name) over (order by name, tm) as lagname,
           lag(tm) over (order by name, tm) as lagtm
    from "table"."table"
 ) AS lagtable
) AS blar
WHERE "flags" = 1