Finding multiple consecutive dates (datetime) in Ruby on Rails / Postgresql - sql

How can we find X consecutive dates (hourly timestamps) that meet a condition?
EDIT: here is the SQL fiddle http://sqlfiddle.com/#!17/44928/1
Example:
Find 3 consecutive dates where aa < 2 and bb < 6 and cc < 7
Given this table called weather:
timestamp        | aa | bb | cc
01/01/2000 00:00 | 1  | 5  | 5
01/01/2000 01:00 | 5  | 5  | 5
01/01/2000 02:00 | 1  | 5  | 5
01/01/2000 03:00 | 1  | 5  | 5
01/01/2000 04:00 | 1  | 5  | 5
01/01/2000 05:00 | 1  | 5  | 5
Answer should return the 3 records from 02:00, 03:00, 04:00.
How can we do this in Ruby on Rails - or directly in SQL if that is better?
I started working on a method based on this answer:
Detect consecutive dates ranges using SQL
def consecutive_dates
  the_query = "WITH t AS (
      SELECT timestamp d, ROW_NUMBER() OVER(ORDER BY timestamp) i
      FROM #d
      GROUP BY timestamp
    )
    SELECT MIN(d), MAX(d)
    FROM t
    GROUP BY DATEDIFF(hour, i, d)"
  ActiveRecord::Base.connection.execute(the_query)
end
But I was unable to get it working.
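For reference, here is roughly how that row-number trick could look when ported to Postgres (a sketch only, assuming the weather table and condition from the question; DATEDIFF and the #d temp table are SQL Server constructs with no direct Postgres equivalent):
WITH t AS (
  -- keep only the rows that satisfy the condition and number them in time order
  SELECT timestamp,
         ROW_NUMBER() OVER (ORDER BY timestamp) AS i
  FROM weather
  WHERE aa < 2 AND bb < 6 AND cc < 7
)
-- consecutive hourly rows share the same anchor (timestamp minus i hours),
-- so grouping by that anchor yields the consecutive ranges
SELECT MIN(timestamp) AS range_start,
       MAX(timestamp) AS range_end,
       COUNT(*)       AS hours_in_range
FROM t
GROUP BY timestamp - i * interval '1 hour'
HAVING COUNT(*) >= 3;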

Assuming that you have one row every hour, then an easy way to get the first hour where this occurs uses lead():
select t.*
from (select t.*,
             lead(timestamp, 2) over (order by timestamp) as timestamp_2
      from t
      where aa < 2 and bb < 6 and cc < 7
     ) t
where timestamp_2 = timestamp + interval '2 hour';
This filters on the conditions and looks at the row two rows ahead. If that row is two hours ahead, then three rows in a row match the conditions. Note: the above will return both 2000-01-01 02:00 and 2000-01-01 03:00.
From your question you only seem to want the earliest. To handle that, use lag() as well:
select t.*
from (select t.*,
             lag(timestamp) over (order by timestamp) as prev_timestamp,
             lead(timestamp, 2) over (order by timestamp) as timestamp_2
      from t
      where aa < 2 and bb < 6 and cc < 7
     ) t
where timestamp_2 = timestamp + interval '2 hour' and
      (prev_timestamp is null or prev_timestamp < timestamp - interval '1 hour');
You can generate the additional hours using generate_series() if you really need the original rows:
select t.timestamp + n.n * interval '1 hour', aa, bb, cc
from (select t.*,
             lead(timestamp, 2) over (order by timestamp) as timestamp_2
      from t
      where aa < 2 and bb < 6 and cc < 7
     ) t cross join lateral
     generate_series(0, 2) n
where timestamp_2 = timestamp + interval '2 hour';
Your data seems to have precise timestamps based on the question, so the timestamp equalities will work. If the real data has more fuzziness, then the queries can be tweaked to take this into account.
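For example (a sketch, not part of the original answer), if the timestamps can drift by a few minutes, the exact equality could be relaxed to a tolerance window:
select t.*
from (select t.*,
             lead(timestamp, 2) over (order by timestamp) as timestamp_2
      from t
      where aa < 2 and bb < 6 and cc < 7
     ) t
-- accept anything roughly two hours ahead; the ±10 minute tolerance is arbitrary
where timestamp_2 >= timestamp + interval '110 minute'
  and timestamp_2 <  timestamp + interval '130 minute';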

This is a gaps-and-islands problem. Islands are adjacent records that match the condition, and you want islands that are at least 3 records long.
Here is one approach: use a window count that increments every time a value that does not match the condition is encountered, which defines the groups. We can then count how many rows there are in each group and use that information to filter.
select *
from (
    select t.*, count(*) over(partition by a, grp) cnt
    from (
        select t.*,
               count(*) filter(where b <= 4) over(partition by a order by timestamp) grp
        from mytable t
    ) t
) t
where cnt >= 3
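The query above uses generic names (mytable, a, b). Adapted to the weather table and the aa < 2 and bb < 6 and cc < 7 condition from the question, the same idea might look like this (a sketch; only the table and column names come from the question):
select timestamp, aa, bb, cc
from (
    -- count how many matching rows each island (grp) contains
    select t.*, count(*) over (partition by grp) cnt
    from (
        -- grp increases on every non-matching row, so the matching rows
        -- that follow it all share the same grp value
        select t.*,
               count(*) filter (where not (aa < 2 and bb < 6 and cc < 7))
                   over (order by timestamp) grp
        from weather t
    ) t
    where aa < 2 and bb < 6 and cc < 7
) t
where cnt >= 3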

Related

How do I window PostgreSQL rows based on row data?

I have a table that has a timestamp column and some data columns. Given an interval length T (say 30 minutes), I want to partition the table into 'sessions'. Two adjacent rows (when sorted by the timestamp) are in the same 'session' if the difference of the timestamp values is less than T. If the difference is more than T, then there is a break in sessions. For example, the table below has two gaps of more than T that split the sessions. How do I generate the session column with SQL?
row | timestamp | session
1   | 18:00     | 1
2   | 18:02     | 1
3   | 18:04     | 1
4   | 18:30     | 1
5   | 19:10     | 2
6   | 19:20     | 2
7   | 20:20     | 3
You can use lag() on the timestamp to measure the difference and then a cumulative sum to calculate the session:
select t.*,
       sum(case when prev_timestamp > timestamp - interval '30 minute' then 0 else 1 end) over
           (order by timestamp) as session
from (select t.*,
             lag(timestamp) over (order by timestamp) as prev_timestamp
      from t
     ) t;
Or, you could use count() with filter in Postgres:
select t.*,
       1 + count(*) filter (where prev_timestamp < timestamp - interval '30 minute') over (order by timestamp) as session
from (select t.*,
             lag(timestamp) over (order by timestamp) as prev_timestamp
      from t
     ) t;

SQL 30 day active user query

I have a table of users and how many events they fired on a given date:
DATE       | USERID | EVENTS
2021-08-27 | 1      | 5
2021-07-25 | 1      | 7
2021-07-23 | 2      | 3
2021-07-20 | 3      | 9
2021-06-22 | 1      | 9
2021-05-05 | 1      | 4
2021-05-05 | 2      | 2
2021-05-05 | 3      | 6
2021-05-05 | 4      | 8
2021-05-05 | 5      | 1
I want to create a table showing the number of active users for each date, with an active user being defined as someone who has fired an event on the given date or in any of the preceding 30 days.
DATE       | ACTIVE_USERS
2021-08-27 | 1
2021-07-25 | 3
2021-07-23 | 2
2021-07-20 | 2
2021-06-22 | 1
2021-05-05 | 5
I tried the following query which returned only the users who were active on the specified date:
SELECT COUNT(DISTINCT USERID), DATE
FROM table
WHERE DATE >= (CURRENT_DATE() - interval '30 days')
GROUP BY 2 ORDER BY 2 DESC;
I also tried using a window function with rows between, but I seem to end up with the same result:
SELECT
    DATE,
    SUM(ACTIVE_USERS) AS ACTIVE_USERS
FROM
(
    SELECT
        DATE,
        CASE
            WHEN SUM(EVENTS) OVER (PARTITION BY USERID ORDER BY DATE ROWS BETWEEN 30 PRECEDING AND CURRENT ROW) >= 1 THEN 1
            ELSE 0
        END AS ACTIVE_USERS
    FROM table
)
GROUP BY 1
ORDER BY 1
I'm using SQL:ANSI on Snowflake. Any suggestions would be much appreciated.
This is tricky to do with window functions, because count(distinct) is not permitted. You can use a self-join:
select t1.date, count(distinct t2.userid)
from table t1 join
     table t2
     on t2.date <= t1.date and
        t2.date > t1.date - interval '30 day'
group by t1.date;
However, that can be expensive. One solution is to "unpivot" the data: that is, record per user when they go "in" and "out" of the active state, and then take a cumulative sum:
with d as ( -- calculate the dates with "ins" and "outs"
     select user, date, +1 as inc
     from table
     union all
     select user, date + interval '30 day', -1 as inc
     from table
    ),
    d2 as ( -- accumulate to get the net actives per day
     select date, user, sum(inc) as change_on_day,
            sum(sum(inc)) over (partition by user order by date) as running_inc
     from d
     group by date, user
    ),
    d3 as ( -- summarize into active periods (one row per user and period)
     select user, min(date) as start_date, max(date) as end_date
     from (select d2.*,
                  sum(case when running_inc = 0 then 1 else 0 end) over (partition by user order by date) as active_period
           from d2
          ) d2
     where running_inc > 0
     group by user, active_period
    )
select d.date, count(d3.user)
from (select distinct date from table) d left join
     d3
     on d.date >= start_date and d.date < end_date
group by d.date;

How to get values from the previous row?

I have a table like this:
ID | NUMBER | TIMESTAMP
1  | 1      | 05/28/2020 09:00:00
2  | 2      | 05/29/2020 10:00:00
3  | 1      | 05/31/2020 21:00:00
4  | 1      | 06/01/2020 21:00:00
And I want to show data like this:
ID | NUMBER | TIMESTAMP           | RANGE
1  | 1      | 05/28/2020 09:00:00 | 0 Days
2  | 2      | 05/29/2020 10:00:00 | 0 Days
3  | 1      | 05/31/2020 21:00:00 | 3,5 Days
4  | 1      | 06/01/2020 21:00:00 | 1 Days
So it took 3,5 days to process number 1.
I tried:
select a.id, a.number, a.timestamp, ((a.timestamp - b.timestamp) / 24) as days
from my_table a
left join (select number, timestamp from my_table) b
       on a.number = b.number
It didn't work as expected. How can I do this properly?
Use the window function lag().
With standard interval output:
SELECT *, timestamp - lag(timestamp) OVER(PARTITION BY number ORDER BY id)
FROM tbl
ORDER BY id;
If you need decimal number like in your example:
SELECT *, round((extract(epoch FROM timestamp - lag(timestamp) OVER(PARTITION BY number ORDER BY id)) / 86400)::numeric, 2) || ' days'
FROM tbl
ORDER BY id;
If you also need to display '0 days' instead of NULL like in your example:
SELECT *, COALESCE(round((extract(epoch FROM timestamp - lag(timestamp) OVER(PARTITION BY number ORDER BY id)) / 86400)::numeric, 2), 0) || ' days'
FROM tbl
ORDER BY id;
db<>fiddle here

sum values based on 7-day cycle in SQL Oracle

I have dates and some values, and I would like to sum the values within a 7-day cycle starting from the first date.
date value
01-01-2021 1
02-01-2021 1
05-01-2021 1
07-01-2021 1
10-01-2021 1
12-01-2021 1
13-01-2021 1
16-01-2021 1
18-01-2021 1
22-01-2021 1
23-01-2021 1
30-01-2021 1
This is my input data, which should produce 4 groups under the 7-day cycle.
The cycle should start with the first date and sum all values within 7 days of that date (first date included).
Then a new group starts with the next date plus another 7 days (10-01 till 17-01), then again a new group from 18-01 till 25-01, and so on.
so the output will be
group1 4
group2 4
group3 3
group4 1
With match_recognize this would be easy (current_day < first_day + 7 as a condition for the pattern), but please don't use the match_recognize clause as the solution!
One approach is a recursive CTE:
with tt as (
      select dte, value, row_number() over (order by dte) as seqnum
      from t
     ),
     cte (dte, value, seqnum, firstdte) as (
      select tt.dte, tt.value, tt.seqnum, tt.dte
      from tt
      where seqnum = 1
      union all
      select tt.dte, tt.value, tt.seqnum,
             (case when tt.dte < cte.firstdte + interval '7' day then cte.firstdte else tt.dte end)
      from cte join
           tt
           on tt.seqnum = cte.seqnum + 1
     )
select firstdte, sum(value)
from cte
group by firstdte
order by firstdte;
This identifies the groups by the first date. You can use row_number() over (order by firstdte) if you want a number.
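For example, the final SELECT of the query above could be wrapped like this to label the groups (a sketch; it reuses the cte defined in the recursive query above):
select row_number() over (order by firstdte) as grp,
       firstdte,
       total
from (select firstdte, sum(value) as total
      from cte
      group by firstdte
     )
order by firstdte;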
Here is a db<>fiddle.

Add Missing monthly dates in a timeseries data in Postgresql

I have monthly time series data in a table where the dates are the last day of each month. Some of the dates are missing from the data. I want to insert those dates and put a zero value in the other attributes.
The table is as follows:
id report_date price
1 2015-01-31 40
1 2015-02-28 56
1 2015-04-30 34
2 2014-05-31 45
2 2014-08-31 47
I want to convert this table to
id report_date price
1 2015-01-31 40
1 2015-02-28 56
1 2015-03-31 0
1 2015-04-30 34
2 2014-05-31 45
2 2014-06-30 0
2 2014-07-31 0
2 2014-08-31 47
Is there any way we can do this in PostgreSQL?
Currently we are doing this in Python, but as our data grows day by day it is not efficient to handle the I/O just for this one task.
Thank you
You can do this using generate_series() to generate the dates and then left join to bring in the values:
with m as (
      select id, min(report_date) as minrd, max(report_date) as maxrd
      from t
      group by id
     )
select m.id, m.report_date, coalesce(t.price, 0) as price
from (select m.*, generate_series(minrd, maxrd, interval '1' month) as report_date
      from m
     ) m left join
     t
     on m.id = t.id and m.report_date = t.report_date;
EDIT:
Turns out that the above doesn't quite work, because adding months to the end of month doesn't keep the last day of the month.
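The drift is easy to see directly (a minimal illustration, with made-up dates): generate_series() adds the interval one step at a time, so once the series clamps to the end of February it never returns to the 31st:
select generate_series(date '2015-01-31', date '2015-04-30', interval '1 month');
-- 2015-01-31, 2015-02-28, 2015-03-28, 2015-04-28  (not the month ends)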
This is easily fixed:
with t as (
      select 1 as id, date '2012-01-31' as report_date, 10 as price union all
      select 1 as id, date '2012-04-30', 20
     ), m as (
      select id, min(report_date) - interval '1 day' as minrd, max(report_date) - interval '1 day' as maxrd
      from t
      group by id
     )
select m.id, m.report_date, coalesce(t.price, 0) as price
from (select m.*, generate_series(minrd, maxrd, interval '1' month) + interval '1 day' as report_date
      from m
     ) m left join
     t
     on m.id = t.id and m.report_date = t.report_date;
The first CTE is just to generate sample data.
This is a slight improvement over Gordon's query which fails to get the last date of a month in some cases.
Essentially you generate all the month end dates between the min and max date for each id (using generate_series) and left join on this generated table to show the missing dates with 0 price.
with minmax as (
      select id, min(report_date) as mindt, max(report_date) as maxdt
      from t
      group by id
     )
select m.id, m.report_date, coalesce(t.price, 0) as price
from (select *,
             generate_series(date_trunc('MONTH', mindt + interval '1' day),
                             date_trunc('MONTH', maxdt + interval '1' day),
                             interval '1' month) - interval '1 day' as report_date
      from minmax
     ) m
left join t on m.id = t.id and m.report_date = t.report_date
Sample Demo