This answer shows how to produce High/Low/Open/Close values from a ticker:
Retrieve aggregates for arbitrary time intervals
I am trying to implement a solution based on this (PG 9.2), but am having difficulty in getting the correct value for first_value().
So far, I have tried two queries:
SELECT
cstamp,
price,
date_trunc('hour',cstamp) AS h,
floor(EXTRACT(minute FROM cstamp) / 5) AS m5,
min(price) OVER w,
max(price) OVER w,
first_value(price) OVER w,
last_value(price) OVER w
FROM trades
WHERE date_trunc('hour',cstamp) = timestamp '2013-03-29 09:00:00'
WINDOW w AS (
PARTITION BY date_trunc('hour',cstamp), floor(extract(minute FROM cstamp) / 5)
ORDER BY date_trunc('hour',cstamp) ASC, floor(extract(minute FROM cstamp) / 5) ASC
)
ORDER BY cstamp;
Here's a piece of the result:
cstamp price h m5 min max first last
"2013-03-29 09:19:14";77.00000;"2013-03-29 09:00:00";3;77.00000;77.00000;77.00000;77.00000
"2013-03-29 09:26:18";77.00000;"2013-03-29 09:00:00";5;77.00000;77.80000;77.80000;77.00000
"2013-03-29 09:29:41";77.80000;"2013-03-29 09:00:00";5;77.00000;77.80000;77.80000;77.00000
"2013-03-29 09:29:51";77.00000;"2013-03-29 09:00:00";5;77.00000;77.80000;77.80000;77.00000
"2013-03-29 09:30:04";77.00000;"2013-03-29 09:00:00";6;73.99004;77.80000;73.99004;73.99004
As you can see, 77.8 is not what I believe is the correct value for first_value(), which should be 77.0.
I thought this might be due to the ambiguous ORDER BY in the WINDOW, so I changed it to
ORDER BY cstamp ASC
but this appears to upset the PARTITION as well:
cstamp price h m5 min max first last
"2013-03-29 09:19:14";77.00000;"2013-03-29 09:00:00";3;77.00000;77.00000;77.00000;77.00000
"2013-03-29 09:26:18";77.00000;"2013-03-29 09:00:00";5;77.00000;77.00000;77.00000;77.00000
"2013-03-29 09:29:41";77.80000;"2013-03-29 09:00:00";5;77.00000;77.80000;77.00000;77.80000
"2013-03-29 09:29:51";77.00000;"2013-03-29 09:00:00";5;77.00000;77.80000;77.00000;77.00000
"2013-03-29 09:30:04";77.00000;"2013-03-29 09:00:00";6;77.00000;77.00000;77.00000;77.00000
since the values for max and last now vary within the partition.
What am I doing wrong? Could someone help me better understand the relationship between PARTITION BY and ORDER BY within a WINDOW?
Although I have an answer, here's a trimmed-down pg_dump which will allow anyone to recreate the table. The only thing that's different is the table name.
CREATE TABLE wtest (
cstamp timestamp without time zone,
price numeric(10,5)
);
COPY wtest (cstamp, price) FROM stdin;
2013-03-29 09:04:54 77.80000
2013-03-29 09:04:50 76.98000
2013-03-29 09:29:51 77.00000
2013-03-29 09:29:41 77.80000
2013-03-29 09:26:18 77.00000
2013-03-29 09:19:14 77.00000
2013-03-29 09:19:10 77.00000
2013-03-29 09:33:50 76.00000
2013-03-29 09:33:46 76.10000
2013-03-29 09:33:15 77.79000
2013-03-29 09:30:08 77.80000
2013-03-29 09:30:04 77.00000
\.
SQL Fiddle
All the functions you used act on the window frame, not on the partition. If the frame clause is omitted, the frame ends at the current row. To make the window frame cover the whole partition, declare it explicitly in the frame clause (RANGE ...):
SELECT
cstamp,
price,
date_trunc('hour',cstamp) AS h,
floor(EXTRACT(minute FROM cstamp) / 5) AS m5,
min(price) OVER w,
max(price) OVER w,
first_value(price) OVER w,
last_value(price) OVER w
FROM trades
WHERE date_trunc('hour',cstamp) = timestamp '2013-03-29 09:00:00'
WINDOW w AS (
PARTITION BY date_trunc('hour',cstamp) , floor(extract(minute FROM cstamp) / 5)
ORDER BY cstamp
range between unbounded preceding and unbounded following
)
ORDER BY cstamp;
Here's a quick query to illustrate the behaviour:
select
v,
first_value(v) over w1 f1,
first_value(v) over w2 f2,
first_value(v) over w3 f3,
last_value (v) over w1 l1,
last_value (v) over w2 l2,
last_value (v) over w3 l3,
max (v) over w1 m1,
max (v) over w2 m2,
max (v) over w3 m3,
max (v) over () m4
from (values(1),(2),(3),(4)) t(v)
window
w1 as (order by v),
w2 as (order by v rows between unbounded preceding and current row),
w3 as (order by v rows between unbounded preceding and unbounded following)
Here's the output of the above query (SQLFiddle here):
| V | F1 | F2 | F3 | L1 | L2 | L3 | M1 | M2 | M3 | M4 |
|---|----|----|----|----|----|----|----|----|----|----|
| 1 | 1 | 1 | 1 | 1 | 1 | 4 | 1 | 1 | 4 | 4 |
| 2 | 1 | 1 | 1 | 2 | 2 | 4 | 2 | 2 | 4 | 4 |
| 3 | 1 | 1 | 1 | 3 | 3 | 4 | 3 | 3 | 4 | 4 |
| 4 | 1 | 1 | 1 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
Few people think of the implicit frames that are applied to window functions that take an ORDER BY clause. In this case, windows default to the frame RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW (which behaves like ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW here, since all values of v are distinct). Think about it this way:
On the row with v = 1 the ordered window's frame spans v IN (1)
On the row with v = 2 the ordered window's frame spans v IN (1, 2)
On the row with v = 3 the ordered window's frame spans v IN (1, 2, 3)
On the row with v = 4 the ordered window's frame spans v IN (1, 2, 3, 4)
If you want to prevent that behaviour, you have two options:
Use an explicit ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING clause for ordered window functions
Use no ORDER BY clause in those window functions that allow omitting it (such as MAX(v) OVER ())
More details are explained in this article about LEAD(), LAG(), FIRST_VALUE() and LAST_VALUE()
The result of max() as a window function is based on the frame definition.
The default frame definition (with ORDER BY) runs from the start of the partition up to the last peer of the current row (including the current row and any rows ranking equally with it according to ORDER BY). In the absence of ORDER BY (like in my answer you are referring to), or if ORDER BY treats every row in the partition as equal (like in your first example), all rows in the partition are peers, and max() produces the same result for every row, effectively considering all rows of the partition.
Per documentation:
The default framing option is RANGE UNBOUNDED PRECEDING, which is the
same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. With ORDER BY,
this sets the frame to be all rows from the partition start
up through the current row's last peer. Without ORDER BY, all rows of the
partition are included in the window frame, since all rows become
peers of the current row.
Bold emphasis mine.
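To see the peer behaviour in isolation, here's a tiny self-contained sketch (the values and column names are made up for illustration):

SELECT v, g
      ,max(v) OVER (ORDER BY g) AS m  -- default frame: partition start up to the last peer of the current row
FROM  (VALUES (1, 1), (2, 1), (3, 2)) t(v, g);

Both rows with g = 1 are peers, so they both get m = 2; the row with g = 2 sees all three rows and gets m = 3.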
The simple solution would be to omit the ORDER BY in the window definition - just like I demonstrated in the example you are referring to.
All the gory details about frame specifications in the chapter Window Function Calls in the manual.
Related
When using a window frame clause with a range, we define the start point and the end point of the window we aggregate over. If we order by something that has multiple rows per value, the actual row processed is not deterministic and will be somewhere within this set of peers. So, in this case, will the result also include all rows with the same value as the current row?
The Vertica documentation on window framing (https://my.vertica.com/docs/8.1.x/HTML/index.htm#Authoring/AnalyzingData/SQLAnalytics/WindowFraming.htm) does not mention this explicitly, but seems to hint that the frame starts at the actual, non-deterministic row.
So if I have the following table t:
| ts | x |
|------------------ |--- |
| 2017-11-29 10:00 | 1 |
| 2017-11-30 10:00 | 2 |
| 2017-11-30 11:00 | 3 |
| 2017-12-01 11:00 | 4 |
and the following query:
with results as (
  select
    ts,
    sum(x) over (order by ts::date range between current row and unbounded following) as r
  from t
)
select r from results where ts = '2017-11-30 11:00'
will it say 9 (2+3+4), or will it say either 9 or 7 depending on how the ordering took place?
How do I include all items with the same value in my window as well?
So, when simply selecting all rows from the CTE, you are actually able to test this using the following query:
with results as (
select
sum(x) over (order by ts::date range between current row and unbounded following) as r
from t
)
select r from results
The results are:
r
10
9
9
4
The two 9's in there mean that the frame actually includes every row of the same date (the ordering can't distinguish any further within a date), not just the current row.
I tested this using sqlfiddle on Postgres 9.6 and in Vertica 8.1 directly in the database.
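For reference, here's a minimal setup for the table t used above; the column types (timestamp ts, integer x) are my assumption based on the sample data:

create table t (ts timestamp, x int);
insert into t (ts, x) values
  ('2017-11-29 10:00', 1),
  ('2017-11-30 10:00', 2),
  ('2017-11-30 11:00', 3),
  ('2017-12-01 11:00', 4);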
I have a table of time spans that overlap each other. I want to generate a table that covers the same time spans but doesn't overlap.
For example, say I have a table like this:
Start,End
1, 4
3, 5
7, 8
2, 4
I want a new table like this:
Start,End
1, 5
7, 8
What is the SQL query to do this?
Tested on spark-sql version 1.5.2.
(and with small changes - on Teradata, Oracle, PostgreSQL and SQL Server)
In order to guarantee the correctness of this solution, the order by clauses in the two analytic functions should be identical and deterministic; so if you have an Id column, use order by `Start`,`Id` instead of order by `Start`,`End`.
select min(`Start`) as `Start`
,max(`End`) as `End`
from (select `Start`,`End`
,count(is_gap) over
(
order by `Start`,`End`
rows unbounded preceding
) + 1 as range_seq
from (select `Start`,`End`
,case
when max(`End`) over
(
order by `Start`,`End`
rows between unbounded preceding
and 1 preceding
) < `Start`
then 1
end is_gap
from mytable
) t
) t
group by range_seq
order by `Start`
+-------+-----+
| Start | End |
+-------+-----+
| 1 | 5 |
+-------+-----+
| 7 | 8 |
+-------+-----+
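The backticks are Spark/MySQL-style quoting; for PostgreSQL (one of the "small changes" mentioned above) you would use double quotes instead, since End is a reserved word there. A sketch of the same query against an assumed integer schema:

create table mytable ("Start" int, "End" int);
insert into mytable values (1, 4), (3, 5), (7, 8), (2, 4);

select min("Start") as "Start"
      ,max("End") as "End"
from (select "Start","End"
            ,count(is_gap) over
             (
              order by "Start","End"
              rows unbounded preceding
             ) + 1 as range_seq
      from (select "Start","End"
                  ,case
                     when max("End") over
                          (
                           order by "Start","End"
                           rows between unbounded preceding
                                    and 1 preceding
                          ) < "Start"
                     then 1
                   end as is_gap
            from mytable
           ) t
     ) t
group by range_seq
order by "Start";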
I'm trying to select first & last date in window based on month & year of date supplied.
Here is example data:
F.rates
| id | c_id | date | rate |
---------------------------------
| 1 | 1 | 01-01-1991 | 1 |
| 1 | 1 | 15-01-1991 | 0.5 |
| 1 | 1 | 30-01-1991 | 2 |
.................................
| 1 | 1 | 01-11-2014 | 1 |
| 1 | 1 | 15-11-2014 | 0.5 |
| 1 | 1 | 30-11-2014 | 2 |
Here is the PostgreSQL SELECT I came up with:
SELECT c_id, first_value(date) OVER w, last_value(date) OVER w FROM F.rates
WINDOW w AS (PARTITION BY EXTRACT(YEAR FROM date), EXTRACT(MONTH FROM date), c_id
ORDER BY date ASC)
Which gives me a result pretty close to what I want:
| c_id | first_date | last_date |
----------------------------------
| 1 | 01-01-1991 | 15-01-1991 |
| 1 | 01-01-1991 | 30-01-1991 |
.................................
Should be:
| c_id | first_date | last_date |
----------------------------------
| 1 | 01-01-1991 | 30-01-1991 |
.................................
For some reason, last_value(date) returns a different value for every record in the window, which makes me think I'm misunderstanding how windows in SQL work. It's as if SQL forms a new window for each row it iterates through, rather than multiple windows for the entire table based on YEAR and MONTH.
So could anyone be kind enough to explain whether I'm wrong, and how I can achieve the result I want?
There is a reason why I'm not using MAX/MIN with a GROUP BY clause. My next step would be to retrieve the rates associated with the dates I selected, like:
| c_id | first_date | last_date | first_rate | last_rate | avg rate |
-----------------------------------------------------------------------
| 1 | 01-01-1991 | 30-01-1991 | 1 | 2 | 1.1 |
.......................................................................
If you want your output grouped into a single row (or just fewer rows), you should use simple aggregation (i.e. GROUP BY), if avg_rate is enough:
SELECT c_id, min(date), max(date), avg(rate)
FROM F.rates
GROUP BY c_id, date_trunc('month', date)
More about window functions in PostgreSQL's documentation:
But unlike regular aggregate functions, use of a window function does not cause rows to become grouped into a single output row — the rows retain their separate identities.
...
There is another important concept associated with window functions: for each row, there is a set of rows within its partition called its window frame. Many (but not all) window functions act only on the rows of the window frame, rather than of the whole partition. By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the ORDER BY clause. When ORDER BY is omitted the default frame consists of all rows in the partition.
...
There are options to define the window frame in other ways ... See Section 4.2.8 for details.
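That said, if you do want to stick with window functions, your original query starts working once the frame is explicitly widened to the whole partition - a minimal sketch:

SELECT c_id
      ,first_value(date) OVER w AS first_date
      ,last_value(date)  OVER w AS last_date
FROM   F.rates
WINDOW w AS (PARTITION BY EXTRACT(YEAR FROM date), EXTRACT(MONTH FROM date), c_id
             ORDER BY date ASC
             RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING);

As the quote says, this still returns one row per input row (the rows retain their separate identities), so you'd still need DISTINCT or aggregation to collapse it to one row per month.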
EDIT:
If you want to collapse (min/max aggregation) your data and want to collect more columns than those listed in GROUP BY, you have two choices:
The SQL way
Select the min/max value(s) in a sub-query, then join their original rows back (but this way you have to deal with the fact that the min/max-ed column(s) are usually not unique):
SELECT agg.c_id,
min first_date,
max last_date,
first.rate first_rate,
last.rate last_rate,
avg avg_rate
FROM (SELECT c_id, min(date), max(date), avg(rate)
FROM F.rates
GROUP BY c_id, date_trunc('month', date)) agg
JOIN F.rates first ON agg.c_id = first.c_id AND agg.min = first.date
JOIN F.rates last ON agg.c_id = last.c_id AND agg.max = last.date
PostgreSQL's DISTINCT ON
DISTINCT ON is typically meant for this task, but it relies heavily on ordering (only one extremum can be found this way at a time):
SELECT DISTINCT ON (c_id, date_trunc('month', date))
c_id,
date first_date,
rate first_rate
FROM F.rates
ORDER BY c_id, date_trunc('month', date), date
You can join this query with other aggregated sub-queries of F.rates, but at this point (if you really need both minimum & maximum, and in your case even an average) the SQL-compliant way is the better fit.
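For completeness, a hypothetical companion query for the other extremum: the same DISTINCT ON pattern with a descending date sort picks the last row per (c_id, month) instead of the first:

SELECT DISTINCT ON (c_id, date_trunc('month', date))
       c_id,
       date AS last_date,
       rate AS last_rate
FROM F.rates
ORDER BY c_id, date_trunc('month', date), date DESC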
Windowing functions aren't appropriate for this. Use aggregate functions instead.
select
c_id, date_trunc('month', date)::date,
min(date) first_date, max(date) last_date
from rates
group by c_id, date_trunc('month', date)::date;
c_id | date_trunc | first_date | last_date
------+------------+------------+------------
1 | 2014-11-01 | 2014-11-01 | 2014-11-30
1 | 1991-01-01 | 1991-01-01 | 1991-01-30
create table rates (
id integer not null,
c_id integer not null,
date date not null,
rate numeric(2, 1),
primary key (id, c_id, date)
);
insert into rates values
(1, 1, '1991-01-01', 1),
(1, 1, '1991-01-15', 0.5),
(1, 1, '1991-01-30', 2),
(1, 1, '2014-11-01', 1),
(1, 1, '2014-11-15', 0.5),
(1, 1, '2014-11-30', 2);
I have data that is arranged in a ring structure (or circular buffer), that is it can be expressed as sequences that cycle: ...-1-2-3-4-5-1-2-3-.... See this picture to get an idea of a 5-part ring:
I'd like to create a window query that can combine the lag and lead items into a three point array, but I can't figure it out. For example at part 1 of a 5-part ring, the lag/lead sequence is 5-1-2, or at part 4 is 3-4-5.
Here is an example table of two rings with different numbers of parts (always more than three per ring):
create table rp (ring int, part int);
insert into rp(ring, part) values(1, generate_series(1, 5));
insert into rp(ring, part) values(2, generate_series(1, 7));
Here is a nearly successful query:
SELECT ring, part, array[
lag(part, 1, NULL) over (partition by ring),
part,
lead(part, 1, 1) over (partition by ring)
] AS neighbours
FROM rp;
ring | part | neighbours
------+------+------------
1 | 1 | {NULL,1,2}
1 | 2 | {1,2,3}
1 | 3 | {2,3,4}
1 | 4 | {3,4,5}
1 | 5 | {4,5,1}
2 | 1 | {NULL,1,2}
2 | 2 | {1,2,3}
2 | 3 | {2,3,4}
2 | 4 | {3,4,5}
2 | 5 | {4,5,6}
2 | 6 | {5,6,7}
2 | 7 | {6,7,1}
(12 rows)
The only thing I need to do is to replace the NULL with the ending point of each ring, which is the last value. Now, along with lag and lead window functions, there is a last_value function which would be ideal. However, these cannot be nested:
SELECT ring, part, array[
lag(part, 1, last_value(part) over (partition by ring)) over (partition by ring),
part,
lead(part, 1, 1) over (partition by ring)
] AS neighbours
FROM rp;
ERROR: window function calls cannot be nested
LINE 2: lag(part, 1, last_value(part) over (partition by ring)) ...
Update: Thanks to @Justin's suggestion to use coalesce to avoid nesting window functions. Furthermore, numerous folks have pointed out that first_value() / last_value() need an explicit ORDER BY on the ring sequence, which happens to be part in this example. So, randomising the input data a bit:
create table rp (ring int, part int);
insert into rp(ring, part) select 1, generate_series(1, 5) order by random();
insert into rp(ring, part) select 2, generate_series(1, 7) order by random();
Use COALESCE like @Justin provided.
With first_value() / last_value() you need to add an ORDER BY clause to the window definition or the order is undefined. You just got lucky in the example, because the rows happen to be in order right after creating the dummy table.
Once you add ORDER BY, the default window frame ends at the current row, and you need to special-case the last_value() call - or reverse the sort order in the window, as demonstrated in my first example.
When reusing a window definition multiple times, an explicit WINDOW clause simplifies syntax a lot:
SELECT ring, part, ARRAY[
coalesce(
lag(part) OVER w
,first_value(part) OVER (PARTITION BY ring ORDER BY part DESC))
,part
,coalesce(
lead(part) OVER w
,first_value(part) OVER w)
] AS neighbours
FROM rp
WINDOW w AS (PARTITION BY ring ORDER BY part);
Better yet, reuse the same window definition, so Postgres can calculate all values in a single scan. For this to work we need to define a custom window frame:
SELECT ring, part, ARRAY[
coalesce(
lag(part) OVER w
,last_value(part) OVER w)
,part
,coalesce(
lead(part) OVER w
,first_value(part) OVER w)
] AS neighbours
FROM rp
WINDOW w AS (PARTITION BY ring
ORDER BY part
RANGE BETWEEN UNBOUNDED PRECEDING
AND UNBOUNDED FOLLOWING)
ORDER BY 1,2;
You can even adapt the frame definition for each window function call:
SELECT ring, part, ARRAY[
coalesce(
lag(part) OVER w
,last_value(part) OVER (w RANGE BETWEEN CURRENT ROW
AND UNBOUNDED FOLLOWING))
,part
,coalesce(
lead(part) OVER w
,first_value(part) OVER w)
] AS neighbours
FROM rp
WINDOW w AS (PARTITION BY ring ORDER BY part)
ORDER BY 1,2;
Might be faster for rings with many parts. You'll have to test.
SQL Fiddle demonstrating all three with an improved test case. Consider query plans.
More about window frame definitions:
In the manual.
PostgreSQL window function: partition by comparison
PostgreSQL query with max and min date plus associated id per row
Query:
SQL Fiddle example
SELECT ring, part, array[
coalesce(lag(part, 1, NULL) over (partition by ring),
max(part) over (partition by ring)),
part,
lead(part, 1, 1) over (partition by ring)
] AS neighbours
FROM rp;
Result:
| RING | PART | NEIGHBOURS |
|------|------|------------|
| 1 | 1 | 5,1,2 |
| 1 | 2 | 1,2,3 |
| 1 | 3 | 2,3,4 |
| 1 | 4 | 3,4,5 |
| 1 | 5 | 4,5,1 |
| 2 | 1 | 7,1,2 |
| 2 | 2 | 1,2,3 |
| 2 | 3 | 2,3,4 |
| 2 | 4 | 3,4,5 |
| 2 | 5 | 4,5,6 |
| 2 | 6 | 5,6,7 |
| 2 | 7 | 6,7,1 |
I have a database with a table called matchstats, which includes a column called time that is updated each time an action takes place. I also have a column called groundstatsid which, when it is not null, means the action took place on the ground as opposed to standing. Finally, I have a column called Round.
Example:
Time | groundstatsid | Round
1 | NULL | 1
8 | NULL | 1
15 | NULL | 1
18 | 1 | 1
20 | 1 | 1
22 | NULL | 1
30 | NULL | 1
1 | NULL | 2
To get the full standing time, I basically want the query to take the first time (1) and store it, then look at groundstatsid until it sees a NON NULL value and take the time at that position (18), subtracting the stored number to get the standing time (17). Then it should continue looking for rows where groundstatsid IS NULL. Once it finds one, it should repeat the process: look until it finds a NON NULL value in groundstatsid or a new round, and then start the whole process again.
Once it has gone through an entire match, I would want it to sum the results.
For the example, I would expect the query to return 25.
I would boil this problem down to one where you consider pairs of rows, sorted by time within each round. PostgreSQL can do this in one pass - no JOINs, no PL/pgSQL - using window functions:
SELECT
round,
first_value(time) OVER pair AS first_time,
last_value(time) OVER pair AS last_time,
first_value(groundstatsid IS NULL) OVER pair AS first_is_standing,
last_value(groundstatsid IS NULL) OVER pair AS last_is_standing
FROM matchstats
WINDOW pair AS (PARTITION BY round ORDER BY time ROWS 1 PRECEDING);
This tells PostgreSQL to read the rows from the table (presumably constrained by WHERE fightid=? or something), but to consider each round separately for windowing operations. Window functions like first_value and last_value can access the "window", which I specified to be ORDER BY time ROWS 1 PRECEDING, meaning the window contains both the current row and the one immediately preceding it in time (if any). Thus, window functions let us directly output values for both the current row and its predecessor.
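For reference, a minimal sketch of the matchstats table loaded with the sample data from the question; the column types are assumed, and the real table presumably has more columns (such as the fightid mentioned above):

CREATE TABLE matchstats (
    round         integer NOT NULL,
    time          integer NOT NULL,
    groundstatsid integer            -- NULL while standing
);

INSERT INTO matchstats (time, groundstatsid, round) VALUES
    (1,  NULL, 1),
    (8,  NULL, 1),
    (15, NULL, 1),
    (18, 1,    1),
    (20, 1,    1),
    (22, NULL, 1),
    (30, NULL, 1),
    (1,  NULL, 2);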
For the data you provided, this query yields:
round | first_time | last_time | first_is_standing | last_is_standing
-------+------------+-----------+-------------------+------------------
1 | 1 | 1 | t | t
1 | 1 | 8 | t | t
1 | 8 | 15 | t | t
1 | 15 | 18 | t | f
1 | 18 | 20 | f | f
1 | 20 | 22 | f | t
1 | 22 | 30 | t | t
2 | 1 | 1 | t | t
Looking at these results helped me decide what to do next. Based on my understanding of your logic, I conclude that the person should be regarded as standing from time 1..1, 1..8, 8..15, 15..18, not standing from 18..20, not standing from 20..22, and is standing again from 22..30. In other words, we want to sum the difference between first_time and last_time where first_is_standing is true. Turning that back into SQL:
SELECT round, SUM(last_time - first_time) AS total_time_standing
FROM (
SELECT
round,
first_value(time) OVER pair AS first_time,
last_value(time) OVER pair AS last_time,
first_value(groundstatsid IS NULL) OVER pair AS first_is_standing,
last_value(groundstatsid IS NULL) OVER pair AS last_is_standing
FROM matchstats
WINDOW pair AS (PARTITION BY round ORDER BY time ROWS 1 PRECEDING)
) pairs
WHERE first_is_standing
GROUP BY round;
round | total_time_standing
-------+---------------------
1 | 25
2 | 0
You could also get other values from this same inner query, like the total time or the number of falls by using SUM(CASE WHEN ...) to count independent conditions:
SELECT
round,
SUM(CASE WHEN first_is_standing THEN last_time - first_time ELSE 0 END) AS total_time_standing,
SUM(CASE WHEN first_is_standing AND NOT last_is_standing THEN 1 ELSE 0 END) AS falls,
SUM(last_time - first_time) AS total_time
FROM (
SELECT
round,
first_value(time) OVER pair AS first_time,
last_value(time) OVER pair AS last_time,
first_value(groundstatsid IS NULL) OVER pair AS first_is_standing,
last_value(groundstatsid IS NULL) OVER pair AS last_is_standing
FROM matchstats
WINDOW pair AS (PARTITION BY round ORDER BY time ROWS 1 PRECEDING)
) pairs
GROUP BY round;
round | total_time_standing | falls | total_time
-------+---------------------+-------+------------
1 | 25 | 1 | 29
2 | 0 | 0 | 0
This will calculate standing time for any number of rounds:
SELECT round, sum(down_time - up_time) AS standing_time
FROM (
SELECT round, grp, standing, min(time) AS up_time
,CASE WHEN standing THEN
lead(min(time), 1, max(time)) OVER (PARTITION BY round
ORDER BY min(time))
ELSE NULL END AS down_time
FROM (
SELECT round, time, groundstatsid IS NULL AS standing
,count(groundstatsid) OVER (PARTITION BY round
ORDER BY time) AS grp
FROM tbl
) x
GROUP BY 1, 2, standing
) y
WHERE standing
GROUP BY round
ORDER BY round;
-> sqlfiddle
Explain
Subquery x:
Exploit the fact that count() doesn't count NULL values (neither as aggregate nor as window function). Successive rows with "standing" action (groundstatsid IS NULL) end up with the same value for grp.
Simplify groundstatsid to a boolean var standing, for ease of use and elegance.
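To see what subquery x produces on its own, here's a standalone sketch that feeds it the sample data from the question through a VALUES list instead of the real table tbl:

SELECT round, time, groundstatsid IS NULL AS standing
      ,count(groundstatsid) OVER (PARTITION BY round ORDER BY time) AS grp
FROM  (VALUES
         (1,  NULL::int, 1)
        ,(8,  NULL, 1)
        ,(15, NULL, 1)
        ,(18, 1,    1)
        ,(20, 1,    1)
        ,(22, NULL, 1)
        ,(30, NULL, 1)
        ,(1,  NULL, 2)
      ) AS tbl(time, groundstatsid, round)
ORDER BY round, time;

The count only increments on non-NULL groundstatsid rows, so each run of successive standing rows ends up with a constant grp value - exactly what subquery y groups on.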
Subquery y:
Aggregate per group - standing time matters. From ground time we only need the first row after each standing phase.
Take the minimum time per group as up_time (standing up)
Take the time from the following row (lead(min(time) ...) as down_time (going on the ground). Note that you can use aggregated values in a window function:
lead(min(time), 1, max(time)) OVER ... takes the next min(time) per round and defaults to max(time) of the current row if the round is over (no next row).
Final SELECT:
Only take standing time into account: WHERE standing (i.e. groundstatsid IS NULL)
sum(down_time - up_time) aggregates the total standing time per round.
Result ordered per round. Voilà.
This makes heavy use of window functions. Needs PostgreSQL 8.4 or later.
You could do the same procedurally in a plpgsql function if performance is your paramount requirement.