I am trying to determine the amount of time my data spends above a certain threshold. I have a SQL table of values that looks like this:
The first column is a datetime and the second column is a value. This is time series data, so it is a large table and cannot be changed. Using a threshold of 50 for the example, I want to know the first value that crosses above the threshold (this is my beginning), the last value before the data crosses back below the threshold (this is my end), and the duration spent above the threshold.
In my example data the beginning would be 9/20/2019 19:18, the end would be 9/20/2019 19:46, and the duration would be 28 minutes.
This needs to be written in one SQL statement due to the requirements of the project. I am just wondering if this is possible and how to do it. Thanks!
You can use lead() and some aggregation:
select t.*
from (select t.*,
             -- minutes from this crossing to the next one
             datediff(minute, ts, lead(ts) over (order by ts)) as diff_minutes
      from (select t.*,
                   lead(value) over (order by ts) as next_value
            from t
           ) t
      -- keep only the rows where the value crosses the threshold
      where (value < 50 and next_value >= 50) or
            (value >= 50 and next_value < 50)
     ) t
where value < 50;
Your question is a little tricky because you want the time span to start just before the period in question. The query above implements the following steps:
Identify the next value for each row.
Keep a row when the value crosses the threshold in either direction, i.e. the current value is below 50 and the next value is at or above it, or vice versa. These are the rows just before and just after each over-threshold period.
Use lead() again to get the ending timestamp for each crossing.
Finally, filter down to just the row that starts each period (value < 50).
Another approach is perhaps simpler. Define the groups based on the cumulative count of below-threshold rows at or before each row. This keeps the previous row with the following group.
Then aggregate:
select min(ts), max(ts),
datediff(minute, min(ts), max(ts)) as diff_minute
from (select t.*,
sum(case when value < 50 then 1 else 0 end) over (order by ts) as grp
from t
) t
group by grp;
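Note that consecutive below-threshold rows each end up in their own one-row group. If you want to report only the periods that actually rise above the threshold, one option (a sketch against the same table t) is to filter the groups with a HAVING clause:
select min(ts), max(ts),
       datediff(minute, min(ts), max(ts)) as diff_minute
from (select t.*,
             sum(case when value < 50 then 1 else 0 end) over (order by ts) as grp
      from t
     ) t
group by grp
having max(value) >= 50;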
It looks like you are sampling every 10 seconds. If that rate is pretty solid, you can just count how many records are above 50 during a selected interval and multiply by 10 seconds; that will be the duration spent above 50.
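A minimal sketch of that idea, assuming the same table t with columns ts and value, a steady 10-second sample rate, and placeholder interval bounds:
select count(*) * 10 as seconds_over_threshold
from t
where value > 50
  and ts >= '2019-09-20 19:00'
  and ts < '2019-09-20 20:00';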
In Grafana, we want to show bars indicating the maximum of 15-minute averages in the chosen time interval. Our data has regular 1-minute intervals. The database is PostgreSQL.
To show the 15-minute averages, we use the following query:
SELECT
timestamp AS time,
AVG(rawvalue) OVER(ORDER BY timestamp ROWS BETWEEN 7 PRECEDING AND 7 FOLLOWING) AS value,
'15-min Average' AS metric
FROM database.schema
WHERE $__timeFilter(timestamp) AND device = '$Device'
ORDER BY time
To show bars indicating the maximum of raw values in the chosen time interval, we use the following query:
SELECT
$__timeGroup(timestamp,'$INTERVAL') AS time,
MAX(rawvalue) AS value,
'Interval Max' AS metric
FROM database.schema
WHERE $__timeFilter(timestamp) AND device = '$Device'
GROUP BY $__timeGroup(timestamp,'$INTERVAL')
ORDER BY time
A naive combination of both solutions does not work:
SELECT
$__timeGroup(timestamp,'$INTERVAL') AS time,
MAX(AVG(rawvalue) OVER(ORDER BY timestamp ROWS BETWEEN 7 PRECEDING AND 7 FOLLOWING)) AS value,
'Interval Max 15-min Average' AS metric
FROM database.schema
WHERE $__timeFilter(timestamp) AND device = '$Device'
GROUP BY $__timeGroup(timestamp,'$INTERVAL')
ORDER BY time
We get error: "pq: aggregate function calls cannot contain window function calls".
There is a suggestion on SO to use "with" (Count by criteria over partition), but I do not know how to use it in our case.
Use the first query as a CTE (that is, a WITH clause) for the second one. The ORDER BY clause of the CTE, the WHERE clause of the second query, and the metric column of the CTE are no longer needed. Alternatively, you can use the first query as a derived table in the FROM clause of the second one, as sketched after the query below.
with t as
(
SELECT
timestamp AS time,
AVG(rawvalue) OVER(ORDER BY timestamp ROWS BETWEEN 7 PRECEDING AND 7 FOLLOWING) AS value
FROM database.schema
WHERE $__timeFilter(timestamp) AND device = '$Device'
)
SELECT
$__timeGroup(time,'$INTERVAL') AS time,
MAX(value) AS value,
'Interval Max 15-min Average' AS metric
FROM t
GROUP BY 1 ORDER BY 1;
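For completeness, here is the derived-table variant mentioned above; the CTE body simply moves into the FROM clause:
SELECT
    $__timeGroup(time,'$INTERVAL') AS time,
    MAX(value) AS value,
    'Interval Max 15-min Average' AS metric
FROM
(
    SELECT
        timestamp AS time,
        AVG(rawvalue) OVER(ORDER BY timestamp ROWS BETWEEN 7 PRECEDING AND 7 FOLLOWING) AS value
    FROM database.schema
    WHERE $__timeFilter(timestamp) AND device = '$Device'
) t
GROUP BY 1 ORDER BY 1;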
Unrelated, but what are $__timeFilter and $__timeGroup? Their semantics are clear, but where do they come from? BTW you may find this function useful.
I have a table in PostgreSQL that contains the GPS points from cell phones. It has an integer column that stores the epoch (the number of seconds since 1970). I want to order the table based on time (the epoch column), then break the trips into sub-trips when there is no GPS record for more than 2 minutes.
I did it with GeoPandas. However, it is too slow. I want to do it inside PostgreSQL. How can I compare each row of the ordered table with the previous row (to see if the epoch has a difference of 2 minutes or more)?
In fact, I do not know how to compare each row with the previous one.
You can use lag():
select t.*
from (select t.*,
             lag(timestamp_epoch) over (partition by trip order by timestamp_epoch) as last_timestamp_epoch
      from t
     ) t
-- rows whose previous fix is more than 2 minutes older start a new sub-trip
where last_timestamp_epoch < timestamp_epoch - 120;
I want to order the table based on time (the epoch column), then break the trips into sub-trips when there is no GPS record for more than 2 minutes.
After comparing to the previous (or next) row, with the window function lag() (or lead()), form groups based on the gaps to get sub trip numbers:
SELECT *, count(*) FILTER (WHERE step) OVER (PARTITION BY trip ORDER BY timestamp_epoch) AS sub_trip
FROM (
SELECT *
, (timestamp_epoch - lag(timestamp_epoch) OVER (PARTITION BY trip ORDER BY timestamp_epoch)) > 120 AS step
FROM tbl
) sub;
Further reading:
Select longest continuous sequence
I'm trying to determine the length of time in days between the AR_Event_Creation_Date_Time values of every other row. For example, the number of days between the 1st and 2nd rows, the 3rd and 4th, the 5th and 6th, etc. In other words, there will be a number-of-days value for every even row and NULL for every odd row. My code below works if there are only two rows per borrower number but falls down when there are more than two. In the results, notice the change in 1002092539.
SELECT Borrower_Number,
       Workgroup_Name,
       FORMAT(AR_Event_Creation_Date_Time,'d','en-us') AS Tag_Date,
       Usr_Usrnm,
       DATEDIFF(day,
                LAG(AR_Event_Creation_Date_Time,1) OVER(PARTITION BY Borrower_Number ORDER BY Borrower_Number),
                AR_Event_Creation_Date_Time) AS Diff
FROM Control_Mail
You need to add in a row number. Also, the ORDER BY in your OVER clause is non-deterministic: it orders each partition by the very column it is partitioned on. Pairing rows with integer division, (rn - 1) / 2, puts rows 1 and 2 in one pair, rows 3 and 4 in the next, and so on, so LAG only looks back within each pair:
SELECT Borrower_Number,
Workgroup_Name,
FORMAT(AR_Event_Creation_Date_Time,'d','en-us') AS Tag_Date,
Usr_Usrnm,
DATEDIFF(day, LAG(AR_Event_Creation_Date_Time,1) OVER(PARTITION BY Borrower_Number, (rn - 1) / 2 ORDER BY AR_Event_Creation_Date_Time),
AR_Event_Creation_Date_Time) Diff
FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY Borrower_Number ORDER BY AR_Event_Creation_Date_Time) AS rn
FROM Control_Mail
) C
My goal is to build an hourly count for records that have a start date/time and an end date/time. The actual records are never more than 24 hours from start to finish, but many times they are less. It works if I bounce every record against my "clock", which has 24 slots for every date up to today, but it can take forever to run as there can be 2000 records in a day.
This is the detail I get:
The date/times in green are what I want as the start date/time for a group. The blue date/times are what I want as the end date time for the group.
Like this:
I have tried partitioning, but because the 4th row in the second picture has the same values as the 2nd row, it groups them together even though there is a time span between them (the third row).
This is a gaps-and-islands problem. The start and end dates match on adjacent rows, so a difference of row numbers seems sufficient:
select id, min(startdatetime), max(enddatetime),
       d_id, class, location
from (select t.*,
             row_number() over (partition by id order by startdatetime) as seqnum,
             row_number() over (partition by id, d_id, class, location order by startdatetime) as seqnum_2
      from t
     ) t
group by id, d_id, class, location, (seqnum - seqnum_2)
order by id, min(startdatetime);
I have a table with signal name, value, and timestamp. These signals were recorded at a sampling rate of 1 sample/sec. Now I want to plot a graph of the values over months, and it is becoming very heavy for the system to do this within seconds. So my question is: is there any way to view 1 value/minute, in other words to see every 60th row?
You can use the row_number() function to enumerate the rows, and then use modulo arithmetic to keep every 60th one:
select signalname, value, timestamp
from (select t.*,
             row_number() over (order by timestamp) as seqnum
      from yourtable t
     ) t
where seqnum % 60 = 0;
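One caveat: if the table holds more than one signal name, the numbering above interleaves the signals. A variant (a sketch, keeping the same placeholder names) that samples each signal separately:
select signalname, value, timestamp
from (select t.*,
             row_number() over (partition by signalname order by timestamp) as seqnum
      from yourtable t
     ) t
where seqnum % 60 = 0;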
If your data really is regular, you can also extract the seconds value and check when that is 0:
select signalname, value, timestamp
from yourtable t
where datepart(second, timestamp) = 0
This assumes that timestamp is stored in an appropriate date/time format.
Instead of sampling, you could use the one-minute average for your plot:
select name
, min(timestamp)
, avg(value)
from Yourtable
group by
name
, datediff(minute, '2013-01-01', timestamp)
If you are charting months, even the hourly average might be detailed enough.
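For instance, an hourly version only changes the datediff unit (same hypothetical Yourtable as above):
select name
     , min(timestamp)
     , avg(value)
from Yourtable
group by
       name
     , datediff(hour, '2013-01-01', timestamp)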