I have a table as shown below.
time                  Event
2021-03-19T17:15:05   A
2021-03-19T17:15:11   B
2021-03-19T17:15:11   C
2021-03-19T17:15:12   A
2021-03-19T17:15:14   C
I want to find the average time between event A and the event following it.
How do I find it using an SQL query?
Here the desired output is 4 seconds.
I really appreciate any help you can provide.
The basic idea is to use lead() to get the time from the next row and then calculate the difference. For all rows:
select t.*,
       to_unix_timestamp(lead(time) over (order by time)) -
       to_unix_timestamp(time) as diff_seconds
from t;
Then use a subquery, filtering for just 'A' and taking the average:
select avg(diff_seconds)
from (select t.*,
             to_unix_timestamp(lead(time) over (order by time)) -
             to_unix_timestamp(time) as diff_seconds
      from t
     ) t
where event = 'A';
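As a sanity check, here is a self-contained version with the sample data. Spark SQL syntax is assumed (to_unix_timestamp suggests Hive or Spark); the expected result is 4.0.

with t as (
  select cast(time as timestamp) as time, event
  from values
         ('2021-03-19T17:15:05', 'A'),
         ('2021-03-19T17:15:11', 'B'),
         ('2021-03-19T17:15:11', 'C'),
         ('2021-03-19T17:15:12', 'A'),
         ('2021-03-19T17:15:14', 'C')
       as v(time, event)
)
select avg(diff_seconds) as avg_seconds
from (select t.*,
             to_unix_timestamp(lead(time) over (order by time)) -
             to_unix_timestamp(time) as diff_seconds
      from t
     ) t
where event = 'A';
-- the two A rows have gaps of 6 and 2 seconds, so the average is 4.0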
I am trying to create the following logic in Alteryx with data coming from an Exasol database.
A column "Sum_Qty_28_days" should sum up the values of the "Qty" column for the same article over the last 28 days.
My sample data and the desired output were posted as images (not reproduced here); the table has "article", "Date", and "Qty" columns, plus the desired "Sum_Qty_28_days" output column.
E.g. the "Sum_Qty_28_days" value for "article" = 'A' and date = '2019-10-08' is 8, because it sums the "Qty" values associated with the dates falling within the previous 28 days, which are:
2019-09-15
2019-10-05
2019-10-08
for "article" = 'A'.
Is this possible using SQL window functions? I tried it myself with the following code:
SUM("Qty") OVER (PARTITION BY "article", date_trunc('month',"Date")
ORDER BY "Date")
But it is far from what I need: it sums up the Qty for dates falling in the same month, whereas I need the sum of Qty over the last 28 days.
Thanks in advance.
Yes, this is possible using standard SQL, and it works in many databases, though not in all of them:
select t.*,
sum(qty) over (partition by article
order by date
range between interval '27 day' preceding and current row
) as sum_qty_28_days
from t;
If your RDBMS does not support the range frame, an alternative solution is a correlated subquery (again a 28-day window: the current day plus the 27 preceding days):
select
t.*,
(
select sum(t1.qty)
from mytable t1
where
t1.article = t.article
and t1.date between t.date - interval '27 day' and t.date
) sum_qty_28_days
from mytable t
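Since the original sample data was an image, here is a hypothetical reconstruction (the Qty values are invented, chosen so that article A's quantities on the three listed dates sum to 8), run through the correlated-subquery version; PostgreSQL syntax assumed:

with t (article, "date", qty) as (
    values ('A', date '2019-09-15', 3),  -- qty values are hypothetical
           ('A', date '2019-10-05', 2),
           ('A', date '2019-10-08', 3)
)
select t.*,
       (select sum(t1.qty)
        from t t1
        where t1.article = t.article
          and t1."date" between t."date" - interval '27 day' and t."date"
       ) as sum_qty_28_days
from t;
-- the 2019-10-08 row gets 3 + 2 + 3 = 8, matching the description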
I am trying to get a lag of one of my columns on an irregular time series. The data is as follows:
timestamp (seconds), temperature
1, 20
4, 12
6, 13
7, 18
The new dataset should be as follows:
timestamp (seconds), temperature, lagged_1_temperature
1, 20, 0
4, 12, 0
6, 13, 0
7, 18, 13
As seen, only the lag for the last row is non-zero, because only its timestamp is exactly one second after the previous row's. For a typical lag I use the Hive query below inside my Spark application:
"select timestamp, value ,lag(value,1) OVER (ORDER BY timestamp) as lagged_1_value"
Can I change the above Hive query to give me the result I want?
You can do this with a case expression:
select t.*,
       case when timestmp - coalesce(lag(timestmp, 1) over (order by timestmp), 0) = 1
            then coalesce(lag(temperature, 1) over (order by timestmp), 0)
            else 0
       end as lagged_1_temperature
from t;
A simple left join might be more efficient:
select t.*,
coalesce(tprev.value, 0) as prev_value
from t left join
t tprev
on tprev.timestmp = t.timestmp - 1;
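Here is a self-contained check of the join approach against the sample data (Spark SQL syntax, matching the question's Hive-on-Spark setup; the column is called timestmp as in the answer above):

with t as (
  select * from values (1, 20), (4, 12), (6, 13), (7, 18)
                as v(timestmp, temperature)
)
select t.*,
       coalesce(tprev.temperature, 0) as lagged_1_temperature
from t
left join t tprev
       on tprev.timestmp = t.timestmp - 1;
-- only timestmp = 7 finds a row one second earlier (6, 13), so it gets 13;
-- every other row gets the default 0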
I have data that comes in as a lifetime total in gallons, and I want to display it as a running total over the time period I am selecting for, rather than as a lifetime total. Here's an example of what I'm looking for:
timestamp  lifetimeTotal  runningTotal
1:30       3000           0
1:31       3001           1
1:32       3005           5
1:33       3010           10
I'm not sure how to go about doing this. I was looking at examples like this one using OVER, but it's not quite what I'm looking for: I don't want to add the rows together every time; rather, I want to add the difference between consecutive rows. Right now I am simply selecting the lifetime totals and displaying them.
Any ideas? I will add code if necessary but there's not much to show besides my select statement; I am having trouble thinking of a way to do this conceptually.
This should give the difference between the current lifetimeTotal and the minimum lifetimeTotal in the selected period (with the sample data the minimum is 3000, so runningTotal comes out 0, 1, 5, 10):
SELECT timestamp,
lifetimeTotal,
lifetimeTotal - MIN(lifetimeTotal) OVER () as runningTotal
FROM Table
This can be easily done using window functions:
SELECT [timestamp], lifetimeTotal,
       COALESCE(SUM(diff) OVER (ORDER BY [timestamp]), 0) AS runningTotal
FROM (
    SELECT [timestamp],
           lifetimeTotal,
           lifetimeTotal - LAG(lifetimeTotal) OVER (ORDER BY [timestamp]) AS diff
    FROM mytable
) t
The above query uses LAG to calculate the difference between current and previous row. SUM OVER is then used in an outer query to calculate the running total of the difference.
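A self-contained version with the sample data, for checking (T-SQL syntax assumed, given the bracketed identifiers; the timestamps are kept as strings for brevity):

SELECT [timestamp], lifetimeTotal,
       COALESCE(SUM(diff) OVER (ORDER BY [timestamp]), 0) AS runningTotal
FROM (
    SELECT [timestamp], lifetimeTotal,
           lifetimeTotal - LAG(lifetimeTotal) OVER (ORDER BY [timestamp]) AS diff
    FROM (VALUES ('1:30', 3000),
                 ('1:31', 3001),
                 ('1:32', 3005),
                 ('1:33', 3010)) v([timestamp], lifetimeTotal)
) t;
-- the diffs are NULL, 1, 4, 5, so the running totals come out 0, 1, 5, 10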
In BigQuery, we're trying to run:
SELECT day, AVG(value)/(1024*1024) FROM (
SELECT value, UTC_USEC_TO_DAY(timestamp) as day,
PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) as rank
FROM [Datastore.PerformanceDatum]
WHERE type = "MemoryPerf"
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day desc;
which returns a relatively small amount of data. But we're getting the message:
Error: Resources exceeded during query execution. The query contained a GROUP BY operator, consider using GROUP EACH BY instead. For more details, please see https://developers.google.com/bigquery/docs/query-reference#groupby
What is making this query fail, the size of the subquery? Is there some equivalent query we can do which avoids the problem?
Edit in response to comments: if I add GROUP EACH BY (and drop the outer ORDER BY), the query fails, claiming that GROUP EACH BY is not parallelizable here.
I wrote an equivalent query that works for me:
SELECT day, AVG(value)/(1024*1024) FROM (
SELECT data value, UTC_USEC_TO_DAY(dtimestamp) as day,
PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) as rank
FROM [io_sensor_data.moscone_io13]
WHERE sensortype = "humidity"
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day desc;
If I run only the inner query, I get 3,660,624 results. Is your dataset bigger than that?
The outer select gives me only 4 results when grouped by day. I'll try a different grouping to see if I can hit a limit there:
SELECT day, AVG(value)/(1024*1024) FROM (
SELECT data value, dtimestamp / 1000 as day,
PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) as rank
FROM [io_sensor_data.moscone_io13]
WHERE sensortype = "humidity"
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day desc;
That runs too, now with 57,862 different groups.
I tried different combinations to reproduce the same error, and I was able to get it by doubling the amount of initial data. An easy "hack" to double the amount of data is changing:
FROM [io_sensor_data.moscone_io13]
To:
FROM [io_sensor_data.moscone_io13], [io_sensor_data.moscone_io13]
Then I get the same error. How much data do you have? Can you apply an additional filter? As you are already partitioning the percentile_rank by day, can you add an additional filter to only analyze a fraction of the days (for example, only the last month), as sketched below?
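Restricting the scan with an extra timestamp predicate might look like this (legacy BigQuery SQL, with a hypothetical cutoff date):

SELECT day, AVG(value)/(1024*1024) FROM (
  SELECT value, UTC_USEC_TO_DAY(timestamp) as day,
         PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) as rank
  FROM [Datastore.PerformanceDatum]
  WHERE type = "MemoryPerf"
    AND timestamp >= PARSE_UTC_USEC('2014-06-01 00:00:00')  -- hypothetical cutoff
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day desc;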
I have a table with sequential timestamps:
2011-03-17 10:31:19
2011-03-17 10:45:49
2011-03-17 10:47:49
...
I need to find the average time difference between each of these (there could be dozens) in seconds, or whatever is easiest; I can work with it from there. So for example, the inter-arrival time for only the first two times above would be 870 (14m 30s). For all three times it would be: (870 + 120)/2 = 495 (8m 15s).
A note: I am using PostgreSQL 8.1.22.
EDIT: The table I mention above is from a different query that is literally just a one-column list of timestamps.
Not sure I understood your question completely, but this might be what you are looking for:
SELECT avg(difference)
FROM (
SELECT timestamp_col - lag(timestamp_col) over (order by timestamp_col) as difference
FROM your_table
) t
The inner query calculates the distance between each row and the preceding row. The result is an interval for each row in the table.
The outer query simply does an average over all differences.
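For a quick check with the three timestamps from the question (note that lag() needs PostgreSQL 8.4 or later, so this will not run on 8.1):

SELECT avg(difference)
FROM (
    SELECT ts - lag(ts) OVER (ORDER BY ts) AS difference
    FROM (VALUES (timestamp '2011-03-17 10:31:19'),
                 (timestamp '2011-03-17 10:45:49'),
                 (timestamp '2011-03-17 10:47:49')) v(ts)
) t;
-- (870 s + 120 s) / 2 = 495 s, i.e. an interval of 00:08:15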
I think you want to find the average timestamptz value. My solution is avg(current - min value); since the result is an interval, add it back to the min value again:
SELECT avg(target_col - (select min(target_col) from your_table))
+ (select min(target_col) from your_table)
FROM your_table
If you cannot upgrade to a version of PG that supports window functions, you
may compute your table's sequential steps "the slow way."
Assuming your table is "tbl" and your timestamp column is "ts":
SELECT AVG(t1 - t0)
FROM (
-- All this silliness would be moot if we could use
-- `` lead(ts) over (order by ts) ''
SELECT tbl.ts AS t0,
next.ts AS t1
FROM tbl
CROSS JOIN
tbl next
WHERE next.ts = (
SELECT MIN(ts)
FROM tbl subquery
WHERE subquery.ts > tbl.ts
)
) derived;
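Since you wanted the result in seconds, you can wrap the interval in extract(epoch from ...), which is available on PostgreSQL 8.1:

SELECT EXTRACT(EPOCH FROM AVG(t1 - t0)) AS avg_seconds
FROM (
    SELECT tbl.ts AS t0,
           next.ts AS t1
    FROM tbl
    CROSS JOIN tbl next
    WHERE next.ts = (
        SELECT MIN(ts)
        FROM tbl subquery
        WHERE subquery.ts > tbl.ts
    )
) derived;
-- with the sample timestamps this yields 495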
But don't do that. Its performance will be terrible. Please do what
a_horse_with_no_name suggests, and use window functions.