I have a table with the following data (paypal transactions):
txn_type | date | subscription_id
----------------+----------------------------+---------------------
subscr_signup | 2014-01-01 07:53:20 | S-XXX01
subscr_signup | 2014-01-05 10:37:26 | S-XXX02
subscr_signup | 2014-01-08 08:54:00 | S-XXX03
subscr_eot | 2014-03-01 08:53:57 | S-XXX01
subscr_eot | 2014-03-05 08:58:02 | S-XXX02
I want to get the average subscription length overall for a given time period (subscr_eot marks the end of a subscription). In the case of a subscription that is still ongoing ('S-XXX03'), I want it included in the average from its start date until now.
How would I go about doing this with an SQL statement in Postgres?
SQL Fiddle. Subscription length for each subscription:
select
subscription_id,
coalesce(t2.date, current_timestamp) - t1.date as subscription_length
from
(
select *
from t
where txn_type = 'subscr_signup'
) t1
left join
(
select *
from t
where txn_type = 'subscr_eot'
) t2 using (subscription_id)
order by t1.subscription_id
The average:
select
avg(coalesce(t2.date, current_timestamp) - t1.date) as subscription_length_avg
from
(
select *
from t
where txn_type = 'subscr_signup'
) t1
left join
(
select *
from t
where txn_type = 'subscr_eot'
) t2 using (subscription_id)
I used a couple of common table expressions; you can take the pieces apart pretty easily to see what they do.
One of the reasons this SQL is complicated is that you're storing what are effectively column names as data: subscr_signup and subscr_eot behave like attributes, not values. This is a SQL anti-pattern; expect it to cause you much pain.
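Just to illustrate the point (hypothetical table and column names, not your current layout): if each subscription were one row with its own start and end columns, the whole question would collapse into a single aggregate.
-- hypothetical normalized layout, for comparison only
CREATE TABLE subscription (
    subscription_id text PRIMARY KEY,
    signup_at       timestamp NOT NULL,
    eot_at          timestamp           -- NULL while the subscription is still active
);

SELECT avg(coalesce(eot_at, localtimestamp) - signup_at) AS subscription_length_avg
FROM subscription;
With the table as it actually is, the CTE query below does the job: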
with subscription_dates as (
select
p1.subscription_id,
p1.date::date as subscr_start,
coalesce((select min(p2.date)::date
from paypal_transactions p2
where p2.subscription_id = p1.subscription_id
and p2.txn_type = 'subscr_eot'
and p2.date > p1.date), current_date) as subscr_end
from paypal_transactions p1
where txn_type = 'subscr_signup'
), subscription_days as (
select subscription_id, subscr_start, subscr_end, (subscr_end - subscr_start) + 1 as subscr_days
from subscription_dates
)
select avg(subscr_days) as avg_days
from subscription_days
-- add your date range here.
avg_days
--
75.6666666666666667
I didn't add your date range as a WHERE clause, because it's not clear to me what you mean by "a given time period".
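If, for example, "a given time period" means subscriptions that started in January 2014 (an assumption on my part), the final select could be filtered like this:
select avg(subscr_days) as avg_days
from subscription_days
where subscr_start >= date '2014-01-01'
  and subscr_start <  date '2014-02-01';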
Using the window function lag(), this becomes considerably shorter:
SELECT avg(ts_end - ts) AS avg_subscr
FROM (
SELECT txn_type, ts, lag(ts, 1, localtimestamp)
OVER (PARTITION BY subscription_id ORDER BY txn_type) AS ts_end
FROM t
) sub
WHERE txn_type = 'subscr_signup';
SQL Fiddle.
lag() conveniently takes a default value for missing rows. Exactly what we need here, so we don't need COALESCE in addition.
The query builds on the fact that subscr_eot sorts before subscr_signup.
Probably faster than presented alternatives so far because it only needs a single sequential scan - even though the window functions add some cost.
Using the column ts instead of date for three reasons:
Your "date" is actually a timestamp.
"date" is a reserved word in standard SQL (even if it's allowed in Postgres).
Never use basic type names as identifiers.
Using localtimestamp instead of now() or current_timestamp since you are obviously operating with timestamp [without time zone].
Also, your columns txn_type and subscription_id should not be text. Consider an enum for txn_type and an integer for subscription_id. That would make the table and indexes considerably smaller and faster.
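A rough sketch of what that could look like (type and column names are my own):
CREATE TYPE txn_type_enum AS ENUM ('subscr_signup', 'subscr_eot');

CREATE TABLE t (
    subscription_id integer       NOT NULL,
    txn_type        txn_type_enum NOT NULL,
    ts              timestamp     NOT NULL
);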
For the query at hand, the whole table has to be read and indexes won't help - except for a covering index in Postgres 9.2+, if you need the read performance:
CREATE INDEX t_foo_idx ON t (subscription_id, txn_type, ts);
Related
I am trying to get all rows from an Oracle DB (using SQL Developer) that match a given date.
My data:
ID | date_time_of_identification
--------------------------------------------
1240088696 | 22-SEP-19 06.24.23.432000000 AM
1239485087 | 21-SEP-19 09.25.45.912000000 AM
1239228398 | 21-SEP-19 07.18.40.555000000 AM
1239223300 | 21-SEP-19 07.16.39.812000000 AM
1233224199 | 18-SEP-19 10.54.04.023000000 AM
1232432331 | 18-SEP-19 05.06.40.383000000 AM
1231492850 | 17-SEP-19 01.06.05.316000000 PM
So, to get all rows from 21.09.2019, I write:
select * from mytable where date_time_of_identification = TO_DATE('2019/09/21', 'yyyy/mm/dd'); -- no result
Now I am trying to write a better query:
select * from mytable
where to_char(date_time_of_identification, 'yyyy/mm/dd') = to_char(TO_DATE('2019/09/21', 'yyyy/mm/dd'), 'yyyy/mm/dd');
It returns the correct result, but is there a better solution?
You'll have to truncate the date column to remove the time part:
select *
from mytable
where trunc(date_time_of_identification) = TO_DATE('2019/09/21', 'yyyy/mm/dd');
Assuming that your predicate is reasonably selective (i.e. the number of rows on a particular day is a small fraction of the number of rows in the table), you'd generally want your query to be able to use an index on date_time_of_identification. If you apply a function to that column, you won't be able to use an index. So you'd generally want to write this as
select *
from myTable
where date_time_of_identification >= date '2019-09-21'
and date_time_of_identification < date '2019-09-22'
The alternative would be to create a function-based index on date_time_of_identification and then use that function in the query.
create index fbi_myTable
on myTable( trunc( date_time_of_identification ) );
select *
from myTable
where trunc( date_time_of_identification ) = date '2019-09-21';
The gem we have installed (Blazer) on our site limits us to one query.
We are trying to write a query to show how many hours each employee has for the past 10 days. The first column would have employee names and the rest would have hours with the column header being each date. I'm having trouble figuring out how to make the column headers dynamic based on the day. The following is an example of what we have working without dynamic column headers and only using 3 days.
SELECT
pivot_table.*
FROM
crosstab(
E'SELECT
"User",
"Date",
"Hours"
FROM
(SELECT
"q"."qdb_users"."name" AS "User",
to_char("qdb_works"."date", \'YYYY-MM-DD\') AS "Date",
sum("qdb_works"."hours") AS "Hours"
FROM
"q"."qdb_works"
LEFT OUTER JOIN
"q"."qdb_users" ON
"q"."qdb_users"."id" = "q"."qdb_works"."qdb_user_id"
WHERE
"qdb_works"."date" > current_date - 20
GROUP BY
"User",
"Date"
ORDER BY
"Date" DESC,
"User" DESC) "x"
ORDER BY 1, 2')
AS
pivot_table (
"User" VARCHAR,
"2017-10-06" FLOAT,
"2017-10-05" FLOAT,
"2017-10-04" FLOAT
);
This results in
| User | 2017-10-05 | 2017-10-04 | 2017-10-03 |
|------|------------|------------|------------|
| John | 1.5 | 3.25 | 2.25 |
| Jill | 6.25 | 6.25 | 6 |
| Bill | 2.75 | 3 | 4 |
This is correct, but tomorrow, the column headers will be off unless we update the query every day. I know we could pivot this table with date on the left and names on the top, but that will still need updating with each new employee – and we get new ones often.
We have tried using functions and queries in the "AS" section with no luck. For example:
AS
pivot_table (
"User" VARCHAR,
current_date - 0 FLOAT,
current_date - 1 FLOAT,
current_date - 2 FLOAT
);
Is there any way to pull this off with one query?
You could select a row for each user, and then per column sum the hours for one day:
with user_work as
(
select u.name as user_name
, to_char(w.date, 'YYYY-MM-DD') as dt_str
, w.hours
from qdb_works w
join qdb_users u
on u.id = w.qdb_user_id
where w.date >= current_date - interval '2 days'
)
select user_name
, sum(case when dt_str = to_char(current_date,
'YYYY-MM-DD') then hours end) as Today
, sum(case when dt_str = to_char(current_date - interval '1 day',
'YYYY-MM-DD') then hours end) as Yesterday
, sum(case when dt_str = to_char(current_date - interval '2 days',
'YYYY-MM-DD') then hours end) as DayBeforeYesterday
from user_work
group by
user_name
It's often easier to return a list and pivot it client side. That also allows you to generate column names with a date.
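For example, a plain list like this (schema qualification omitted; same tables as above) is trivial to pivot client side, and the client is free to label the columns with the dates:
select u.name       as user_name
     , w.date       as work_date
     , sum(w.hours) as hours
from qdb_works w
join qdb_users u on u.id = w.qdb_user_id
where w.date > current_date - 10
group by u.name, w.date
order by w.date desc, u.name;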
Is there any way to pull this off with one query?
No, because a fixed SQL query cannot have any variability in its output columns. The SQL engine determines the number, types and names of every column of a query before executing it, reading nothing but the catalog (for the structure of tables and other objects); execution is just the last of the five stages of query processing.
A single-query dynamic pivot, if such a thing existed, couldn't be prepared, since a prepared query always has the same result structure, whereas by definition a dynamic pivot doesn't: the rows that pivot into columns can change between executions. That again would be at odds with the prepare-bind-execute model.
You may find some limited workarounds and additional explanations in other questions, for example: Execute a dynamic crosstab query. But since you mentioned specifically that "the gem we have installed (Blazer) on our site limits us to one query", I'm afraid you're out of luck. Whatever the workaround, it always needs at least one query to figure out the columns and generate a dynamic query from them, and a second step to execute the query generated in the previous step.
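To illustrate the two-step idea (a sketch, not something Blazer can run as a single statement; the exact column format is an assumption): a first query builds the column definition list for the crosstab() call, and a dynamically generated second statement then uses it.
-- step 1: build the column definition list for the last 10 days
SELECT string_agg(format('%I FLOAT', d::date), ', ' ORDER BY d DESC) AS col_defs
FROM generate_series(current_date - 9, current_date, interval '1 day') AS g(d);
-- step 2 (done by the application, not inside this query): splice col_defs into
--   AS pivot_table ("User" VARCHAR, <col_defs>)
-- and execute the generated crosstab statement.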
Can I have a view with an infinite number of rows? I don't want to select all the rows at once, but is it possible to have a view that represents a repeating weekly schedule, with rows for any date?
I have a database with information about businesses and their hours on different days of the week. Their names:
# SELECT company_name FROM company;
company_name
--------------------
Acme, Inc.
Amalgamated
...
(47 rows)
Their weekly schedules:
# SELECT days, open_time, close_time
FROM hours JOIN company USING(company_id)
WHERE company_name='Acme, Inc.';
days | open_time | close_time
---------+-----------+-----------
1111100 | 08:30:00 | 17:00:00
0000010 | 09:00:00 | 12:30:00
Another table, not shown, has holidays they're closed.
So I can trivially create a user-defined function in the form of a stored procedure that takes a particular date as an argument and returns the business hours of each company:
SELECT company_name,open_time,close_time FROM schedule_for(current_date);
But I want to do it as a table query, so that any SQL-compatible host-language library will have no problem interfacing with it, like this:
SELECT company_name, open_time, close_time
FROM schedule_view
WHERE business_date=current_date;
Relational database theory tells me that tables (relations) are functions in the sense of being a unique mapping from each primary key to a row (tuple). Obviously if the WHERE clause on the above query were omitted it would result in a table (view) having an infinite number of rows, which would be a practical issue. But I'm willing to agree never to query such a view without a WHERE clause that restricts the number of rows.
How can such a view be created (in PostgreSQL)? Or is a view even the way to do what I want?
Update
Here are some more details about my tables. The days of the week are saved as bits, and I select the appropriate row using a bit mask with a single bit shifted according to the day of the week of the requested date. To wit:
The company table:
# \d company
Table "company"
Column | Type | Modifiers
----------------+------------------------+-----------
company_id | smallint | not null
company_name | character varying(128) | not null
timezone | timezone | not null
The hours table:
# \d hours
Table "hours"
Column | Type | Modifiers
------------+------------------------+-----------
company_id | smallint | not null
days | bit(7) | not null
open_time | time without time zone | not null
close_time | time without time zone | not null
The holiday table:
# \d holiday
Table "holiday"
Column | Type | Modifiers
---------------+----------+-----------
company_id | smallint | not null
month_of_year | smallint | not null
day_of_month | smallint | not null
The function I currently have that does what I want (besides invocation) is defined as:
CREATE FUNCTION schedule_for(requested_date date)
RETURNS table(company_name text, open_time timestamptz, close_time timestamptz)
AS $$
WITH field AS (
/* shift the mask as many bits as the requested day of the week */
SELECT B'1000000' >> (to_char(requested_date,'ID')::int -1) AS day_of_week,
to_char(requested_date, 'MM')::int AS month_of_year,
to_char(requested_date, 'DD')::int AS day_of_month
)
SELECT company_name,
(requested_date+open_time) AT TIME ZONE timezone AS open_time,
(requested_date+close_time) AT TIME ZONE timezone AS close_time
FROM hours INNER JOIN company USING (company_id)
CROSS JOIN field
CROSS JOIN holiday
/* if the bit-mask anded with the DOW is the DOW */
WHERE (hours.days & field.day_of_week) = field.day_of_week
AND NOT EXISTS (SELECT 1
FROM holiday h
WHERE h.company_id = hours.company_id
AND field.month_of_year = h.month_of_year
AND field.day_of_month = h.day_of_month);
$$
LANGUAGE SQL;
So again, my goal is to be able to get today's schedule by doing this:
SELECT open_time, close_time FROM schedule_view
WHERE company='Acme, Inc.' AND requested_date=CURRENT_DATE;
and also be able to get the schedule for any arbitrary date by doing this:
SELECT open_time, close_time FROM schedule_view
WHERE company='Acme, Inc.' AND requested_date=CAST ('2013-11-01' AS date);
I'm assuming this would require creating the view referred to here as schedule_view, but maybe I'm mistaken about that. In any event, I want to keep any messy SQL hidden from the command-line interface and from client-language database libraries, as it currently is inside the user-defined function I have.
In other words, I just want to invoke the function I already have by passing the argument in a WHERE clause instead of inside parentheses.
You could create a view with infinite rows by using a recursive CTE. But even that needs a starting point and a terminating condition or it will error out.
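For illustration only, a sketch of that recursive-CTE variant, with made-up names and hard-coded bounds as the required starting point and terminating condition:
CREATE VIEW calendar AS
WITH RECURSIVE days AS (
    SELECT date '2013-01-01' AS d                          -- starting point
    UNION ALL
    SELECT d + 1 FROM days WHERE d < date '2015-12-31'     -- terminating condition
)
SELECT d AS calendar_date FROM days;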
A more practical approach with set returning functions (SRF):
WITH x AS (SELECT '2013-10-09'::date AS day) -- supply your date
SELECT company_id, x.day + open_time AS open_ts
, x.day + close_time AS close_ts
FROM (
SELECT *, unnest(arr)::bool AS open, generate_subscripts(arr, 1) AS dow
FROM (SELECT *, string_to_array(days::text, NULL) AS arr FROM hours) sub
) sub2
CROSS JOIN x
WHERE open
AND dow = EXTRACT(ISODOW FROM x.day);
-- AND NOT EXISTS (SELECT 1 FROM holiday WHERE holiday = x.day)
-> SQLfiddle demo. (with constant day)
Expanding SRFs side by side is generally frowned upon (and for good reason: it's not in the SQL standard and shows surprising behavior if the number of elements is not the same). The new feature WITH ORDINALITY in the upcoming Postgres 9.4 will allow cleaner syntax. Consider this related answer on dba.SE or similarly:
PostgreSQL unnest() with element number
I am assuming bit(7) as the most effective data type for days. To work with it, I am converting it to an array in the first subquery sub.
Note the difference between ISODOW and DOW as field pattern for EXTRACT().
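A quick check of the difference, for a Sunday:
SELECT EXTRACT(ISODOW FROM date '2013-10-13') AS isodow  -- 7 (Mon = 1 .. Sun = 7)
     , EXTRACT(DOW    FROM date '2013-10-13') AS dow;    -- 0 (Sun = 0 .. Sat = 6)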
Updated question
Your function looks good, except for this line:
CROSS JOIN holiday
Otherwise, if I take the bit-shifting route, I end up with a similar query:
WITH x AS (SELECT '2013-10-09'::date AS day) -- supply your date
,y AS (SELECT day, B'1000000' >> (EXTRACT(ISODOW FROM day)::int - 1) AS dow
FROM x)
SELECT c.company_name, y.day + open_time AT TIME ZONE c.timezone AS open_ts
, y.day + close_time AT TIME ZONE c.timezone AS close_ts
FROM hours h
JOIN company c USING (company_id)
CROSS JOIN y
WHERE h.days & y.dow = y.dow;
-- plus the NOT EXISTS (...) holiday exclusion, as in your function
EXTRACT(ISODOW FROM requested_date)::int is just a faster equivalent of to_char(requested_date,'ID')::int
"Pass" day in WHERE clause?
To make that work you would have to generate a huge temporary table covering all possible days before selecting rows for the day in the WHERE clause. Possible (I would employ generate_series()), but very expensive.
My answer to your first draft is a smaller version of this: I expand all rows only for a pattern week before selecting the day matching the date in the WHERE clause. The tricky part is to display timestamps built from the input in the WHERE clause. Not possible. You are back to the huge table covering all days. Unless you have only a few companies and a decently small date range, I would not go there.
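For completeness, a sketch of that expensive variant (assuming Postgres 9.3+ for LATERAL, reusing your schedule_for() function, and with arbitrary date bounds):
CREATE VIEW schedule_view AS
SELECT d::date AS requested_date, s.company_name, s.open_time, s.close_time
FROM   generate_series(date '2013-01-01', date '2015-12-31', interval '1 day') AS g(d)
CROSS  JOIN LATERAL schedule_for(d::date) AS s;

-- then, as you wanted:
-- SELECT open_time, close_time FROM schedule_view
-- WHERE company_name = 'Acme, Inc.' AND requested_date = current_date;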
This is built off the previous answers.
The sample data:
CREATE temp TABLE company (company_id int, company text);
INSERT INTO company VALUES
(1, 'Acme, Inc.')
,(2, 'Amalgamated');
CREATE temp TABLE hours(company_id int, days bit(7), open_time time, close_time time);
INSERT INTO hours VALUES
(1, '1111100', '08:30:00', '17:00:00')
,(2, '0000010', '09:00:00', '12:30:00');
create temp table holidays(company_id int, month_of_year int, day_of_month int);
insert into holidays values
(1, 1, 1),
(2, 1, 1),
(2, 1, 12) -- this was a Saturday in 2013
;
First, just matching a date's day of week against the hours table's day of week, using the logic you provided:
select *
from company a
left join hours b
on a.company_id = b.company_id
left join holidays c
on b.company_id = c.company_id
where (b.days & (B'1000000' >> (to_char(current_date,'ID')::int -1)))
= (B'1000000' >> (to_char(current_date,'ID')::int -1))
;
Postgres lets you create custom operators to simplify expressions like in that where clause, so you might want an operator that matches the day of week between a bit string and a date. First the function that performs the test:
CREATE FUNCTION match_day_of_week(bit, date)
RETURNS boolean
AS $$
select ($1 & (B'1000000' >> (to_char($2,'ID')::int -1))) = (B'1000000' >> (to_char($2,'ID')::int -1))
$$
LANGUAGE sql IMMUTABLE STRICT;
You could stop there and make your where clause look something like "where match_day_of_week(days, some-date)". The custom operator just makes this look a little prettier:
CREATE OPERATOR == (
leftarg = bit,
rightarg = date,
procedure = match_day_of_week
);
Now you've got syntax sugar to simplify that predicate. Here I've also added in the next test (that the month_of_year and day_of_month of a holiday don't correspond with the supplied date):
select *
from company a
left join hours b
on a.company_id = b.company_id
left join holidays c
on b.company_id = c.company_id
where b.days == current_date
and extract(month from current_date) != month_of_year
and extract(day from current_date) != day_of_month
;
For simplicity I start by adding an extra type (another awesome postgres feature) to encapsulate the month and day of the holiday.
create type month_day as (month_of_year int, day_of_month int);
Now repeat the process above to make another custom operator.
CREATE FUNCTION match_day_of_month(month_day, date)
RETURNS boolean
AS $$
select extract(month from $2) = $1.month_of_year
and extract(day from $2) = $1.day_of_month
$$
LANGUAGE sql IMMUTABLE STRICT;
CREATE OPERATOR == (
leftarg = month_day,
rightarg = date,
procedure = match_day_of_month
);
Finally, the original query is reduced to this:
select *
from company a
left join hours b
on a.company_id = b.company_id
left join holidays c
on b.company_id = c.company_id
where b.days == current_date
and not ((c.month_of_year, c.day_of_month)::month_day == current_date)
;
Reducing that down to a view looks like this:
create view x
as
select b.days,
(c.month_of_year, c.day_of_month)::month_day as holiday,
a.company_id,
b.open_time,
b.close_time
from company a
left join hours b
on a.company_id = b.company_id
left join holidays c
on b.company_id = c.company_id
;
And you could use that like this:
select company_id, open_time, close_time
from x
where days == current_date
and not (holiday == current_date)
;
Edit: You'll need to work on this logic a bit, by the way - this was more about showing the idea of how to do it with custom operators. For starters, if a company has multiple holidays defined you'll likely get multiple results back for that company.
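For instance, one way to deal with multiple holidays per company would be to drop the left join against holidays and use NOT EXISTS instead, still with the custom operators defined above (a sketch, not tested against your full data):
select a.company_id, b.open_time, b.close_time
from company a
join hours b
  on a.company_id = b.company_id
where b.days == current_date
  and not exists (
        select 1
        from holidays c
        where c.company_id = a.company_id
          and (c.month_of_year, c.day_of_month)::month_day == current_date
      );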
I posted a similar response on PostgreSQL mailing list. Basically, avoiding the use of a function-invocation API in this situation is likely a foolish decision. The function call is the best API for this use-case. If you have a concrete scenario that you need to support where a function will not work then please provide that and maybe that scenario can be solved without having to compromise the PostgreSQL API. All your comments so far are about planning for an unknown future that very well may never come to be.
I have the following table:
+-----------+-----------+------------+----------+
| id | user_id | start_date | end_date |
| (integer) | (integer) | (date) | (date) |
+-----------+-----------+------------+----------+
Fields start_date and end_date hold date values like YYYY-MM-DD.
An entry from this table can look like this: (1, 120, 2012-04-09, 2012-04-13).
I have to write a query that can fetch all the results matching a certain period.
The problem is that if I want to fetch results from 2012-01-01 to 2012-04-12, I get 0 results even though there is an entry with start_date = "2012-04-09" and end_date = "2012-04-13".
SELECT *
FROM mytable
WHERE (start_date, end_date) OVERLAPS ('2012-01-01'::DATE, '2012-04-12'::DATE);
Datetime functions is the relevant section in the docs.
Assuming you want all "overlapping" time periods, i.e. all that have at least one day in common.
Try to envision time periods on a straight time line and move them around before your eyes and you will see the necessary conditions.
SELECT *
FROM tbl
WHERE start_date <= '2012-04-12'::date
AND end_date >= '2012-01-01'::date;
This is sometimes faster for me than OVERLAPS - which is the other good way to do it (as #Marco already provided).
Note the subtle difference. The manual:
OVERLAPS automatically takes the earlier value of the pair as the start. Each time period is considered to represent the half-open interval start <= time < end, unless start and end are equal in which case it represents that single time instant. This means for instance that two time periods with only an endpoint in common do not overlap.
Bold emphasis mine.
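A quick demonstration of that endpoint behavior:
SELECT (date '2012-01-01', date '2012-04-12') OVERLAPS (date '2012-04-12', date '2012-05-01');  -- false: only an endpoint in common
SELECT (date '2012-01-01', date '2012-04-12') OVERLAPS (date '2012-04-11', date '2012-05-01');  -- true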
Performance
For big tables the right index can help performance (a lot).
CREATE INDEX tbl_date_inverse_idx ON tbl(start_date, end_date DESC);
Possibly with another (leading) index column if you have additional selective conditions.
Note the inverse order of the two columns. See:
Optimizing queries on a range of timestamps (two columns)
I just had the same question and answered it this way, in case it helps.
select *
from table
where start_date between '2012-01-01' and '2012-04-13'
or end_date between '2012-01-01' and '2012-04-13'
To have a query working in any locale settings, consider formatting the date yourself:
SELECT *
FROM testbed
WHERE start_date >= to_date('2012-01-01','YYYY-MM-DD')
AND end_date <= to_date('2012-04-13','YYYY-MM-DD');
Looking at the dates for which it doesn't work -- those where the day is less than or equal to 12 -- I'm wondering whether it's parsing the dates as being in YYYY-DD-MM format?
You have to compare just the date part:
SELECT * FROM testbed WHERE start_date::date >= to_date('2012-09-08', 'YYYY-MM-DD') and end_date::date <= to_date('2012-10-09', 'YYYY-MM-DD')
No offense, but to check the performance of the SQL I executed some of the above-mentioned solutions in pgsql.
Let me share the statistics of the top three solution approaches I came across.
1) Took: 1.58 ms avg
2) Took: 2.87 ms avg
3) Took: 3.95 ms avg
Now try this :
SELECT * FROM table WHERE DATE_TRUNC('day', date ) >= Start Date AND DATE_TRUNC('day', date ) <= End Date
Now this solution took: 1.61 ms avg.
And the best solution is the first one, suggested by marco-mariani:
SELECT *
FROM ecs_table
WHERE (start_date, end_date) OVERLAPS ('2012-01-01'::DATE, '2012-04-12'::DATE + interval '1 day');
Let's try the range data type.
--sample data.
begin;
create temp table tbl(id serial, user_id integer, start_date date, end_date date);
insert into tbl(user_id, start_date, end_date) values(1, '2012-04-09', '2012-04-13');
insert into tbl(user_id, start_date, end_date) values(1, '2012-01-09', '2012-04-12');
insert into tbl(user_id, start_date, end_date) values(1, '2012-02-09', '2012-04-10');
insert into tbl(user_id, start_date, end_date) values(1, '2012-04-09', '2012-04-10');
commit;
Add a new daterange column:
begin;
alter table tbl add column tbl_period daterange ;
update tbl set tbl_period = daterange(start_date,end_date);
commit;
--now test time.
select * from tbl
where tbl_period && daterange('2012-04-10' ::date, '2012-04-12'::date);
returns:
id | user_id | start_date | end_date | tbl_period
----+---------+------------+------------+-------------------------
1 | 1 | 2012-04-09 | 2012-04-13 | [2012-04-09,2012-04-13)
2 | 1 | 2012-01-09 | 2012-04-12 | [2012-01-09,2012-04-12)
further reference: https://www.postgresql.org/docs/current/functions-range.html#RANGE-OPERATORS-TABLE
I have a table with sequential timestamps:
2011-03-17 10:31:19
2011-03-17 10:45:49
2011-03-17 10:47:49
...
I need to find the average time difference between each of these (there could be dozens), in seconds or whatever is easiest; I can work with it from there. So, for example, the inter-arrival time for only the first two timestamps above would be 870 seconds (14m 30s). For all three it would be (870 + 120) / 2 = 495 seconds (8m 15s).
A note: I am using PostgreSQL 8.1.22.
EDIT: The table I mention above is from a different query that is literally just a one-column list of timestamps
Not sure I understood your question completely, but this might be what you are looking for:
SELECT avg(difference)
FROM (
SELECT timestamp_col - lag(timestamp_col) over (order by timestamp_col) as difference
FROM your_table
) t
The inner query calculates the distance between each row and the preceding row. The result is an interval for each row in the table.
The outer query simply does an average over all differences.
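If you need the average as a number of seconds rather than as an interval, wrap it in EXTRACT(EPOCH FROM ...); this is a small variation on the query above, with the same assumed names (note that lag() requires a newer PostgreSQL version than 8.1, as the last answer explains):
SELECT extract(epoch from avg(difference)) AS avg_seconds
FROM (
  SELECT timestamp_col - lag(timestamp_col) over (order by timestamp_col) as difference
  FROM your_table
) t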
I think you want to find avg(timestamptz). My solution is avg(current value - min value), but since the result is an interval, add it back to the min value again:
SELECT avg(target_col - (select min(target_col) from your_table))
+ (select min(target_col) from your_table)
FROM your_table
If you cannot upgrade to a version of PG that supports window functions, you may compute your table's sequential steps "the slow way."
Assuming your table is "tbl" and your timestamp column is "ts":
SELECT AVG(t1 - t0)
FROM (
-- All this silliness would be moot if we could use
-- `` lead(ts) over (order by ts) ''
SELECT tbl.ts AS t0,
next.ts AS t1
FROM tbl
CROSS JOIN
tbl next
WHERE next.ts = (
SELECT MIN(ts)
FROM tbl subquery
WHERE subquery.ts > tbl.ts
)
) derived;
But don't do that. Its performance will be terrible. Please do what a_horse_with_no_name suggests, and use window functions.