I have an SQLite database (with Django as the ORM) with a table of change events (an Account is assigned a new Strategy). I would like to convert it to a time series, so that for each day I have the Strategy the Account was following.
My table:
Expected output:
As shown, there can be more than one change per day. In that case I select the last change of the day, since the desired time-series output must have exactly one value per day.
My question is similar to this one but in SQL, not BigQuery (but I'm not sure I understood the unnest part they propose). I have a working solution in Pandas with reindex and fillna, but I'm sure there is an elegant and simple solution in SQL (maybe even better with Django ORM).
You can use a RECURSIVE Common Table Expression to generate all dates between first and last and then join this generated table with your data to get the needed value for each day:
WITH RECURSIVE daterange(d) AS (
SELECT date(min(created_at)) from events
UNION ALL
SELECT date(d,'1 day') FROM daterange WHERE d<(select max(created_at) from events)
)
SELECT d, account_id, strategy_id
FROM daterange JOIN events
WHERE created_at = (select max(e.created_at) from events e where e.account_id=events.account_id and date(e.created_at) <= d)
GROUP BY account_id, d
ORDER BY account_id, d
The date() function converts a datetime value to a plain date, so you can use it to group your data by day.
date(d, '1 day') applies a modifier of +1 calendar day to d.
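For illustration, this is how the two helpers behave on their own in SQLite:
SELECT date('2022-10-07 15:00:45');   -- '2022-10-07'  (datetime truncated to a date)
SELECT date('2022-10-07', '1 day');   -- '2022-10-08'  (the modifier adds one calendar day)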
Here is an example with your data:
CREATE TABLE events (
created_at,
account_id,
strategy_id
);
insert into events
VALUES ('2022-10-07 12:53:53', 4801323843, 7),
('2022-10-07 08:10:07', 4801323843, 5),
('2022-10-07 15:00:45', 4801323843, 8),
('2022-10-10 13:01:16', 4801323843, 6);
WITH RECURSIVE daterange(d) AS (
SELECT date(min(created_at)) from events
UNION ALL
SELECT date(d,'1 day') FROM daterange WHERE d<(select max(created_at) from events)
)
SELECT d, account_id, strategy_id
FROM daterange JOIN events
WHERE created_at = (select max(e.created_at) from events e where e.account_id=events.account_id and date(e.created_at) <= d)
GROUP BY account_id, d
ORDER BY account_id, d
d           account_id   strategy_id
----------  -----------  -----------
2022-10-07  4801323843   8
2022-10-08  4801323843   8
2022-10-09  4801323843   8
2022-10-10  4801323843   6
2022-10-11  4801323843   6
The query could be slow with many rows. In that case create an index on the created_at column:
CREATE INDEX events_created_idx ON events(created_at);
My final version is the one proposed by @Andrea B., with just a slight performance improvement: the join only merges the rows we actually need, so the WHERE clause can be dropped.
I also converted the NULL to date('now'), so that the current strategy extends to today.
Here is the final version I used:
with recursive daterange(day) as (
    select min(date(created_at)) from events
    union all
    select date(day, '1 day') from daterange
    where day < date('now')
),
events as (
    select account_id, strategy_id, created_at as start_date,
           case lead(created_at) over (partition by account_id order by created_at) is null
                when True then datetime('now')
                else lead(created_at) over (partition by account_id order by created_at)
           end as end_date
    from events
)
select * from daterange
join events on events.start_date < daterange.day
           and daterange.day < events.end_date
order by events.account_id
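As a side note, the CASE ... lead() construct can be written more compactly with coalesce(). A minimal sketch in SQLite, using a hypothetical CTE name (periods) instead of reusing the table name events:
-- "periods" is a hypothetical name; the logic is otherwise the same as the events CTE above
with periods as (
    select account_id,
           strategy_id,
           created_at as start_date,
           coalesce(lead(created_at) over (partition by account_id order by created_at),
                    datetime('now')) as end_date
    from events
)
select * from periods;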
Hope this helps !
I have a table like this:
group_id  start_date  end_date
--------  ----------  --------
19335     20220613    20220714
19527     20220620    20220719
19339     20220614    20220720
19436     20220616    20220715
20095     20220711    20220809
I am trying to retrieve data from another table that is partitioned, and the data should be accessed with _TABLE_SUFFIX BETWEEN start_date AND end_date.
Each group_id contains different user_id values within the period [start_date, end_date]. What I need is to retrieve a column/metric for those users over the 28 days prior to the start_date of each group_id.
My idea is to:
Retrieve distinct user_id per group_id within the period [start_date, end_date]
Retrieve previous 28d metric data prior to the start date of each group_id
A code snippet showing how to retrieve the data for a single group_id is the following:
WITH users_per_group AS (
SELECT
users_metadata.user_id,
users_metadata.group_id,
FROM
`my_table_users_*` users_metadata
WHERE
_TABLE_SUFFIX BETWEEN '20220314' --start_date
AND '20220413' --end_date
AND experiment_id = 16709
GROUP BY
1,
2
)
SELECT
_TABLE_SUFFIX AS date,
user_id,
SUM(
COALESCE(metric, 0)
) AS metric,
FROM
users_per_group
JOIN `my_metric_table*` metric USING (user_id)
WHERE
_TABLE_SUFFIX BETWEEN FORMAT_TIMESTAMP(
'%Y%m%d',
TIMESTAMP_SUB(
PARSE_TIMESTAMP('%Y%m%d', '20220314'), --start_date
INTERVAL 28 DAY
)
) -- 28 days before it starts
AND FORMAT_TIMESTAMP(
'%Y%m%d',
TIMESTAMP_SUB(
PARSE_TIMESTAMP('%Y%m%d', '20220314'), --start_date
INTERVAL 1 DAY
)
) -- 1 day before it starts
GROUP BY
1,
2
ORDER BY
date ASC
Also, I want to avoid retrieving all the data (for all dates) for that metric, as the table is huge and it would take a very long time.
Is there an easy way to retrieve the metric data of each user across groups, considering the 28 days before the start date of each group_id?
I can think of 2 approaches:
1. Join all the tables and then perform your query.
2. Create dynamic queries for each of your users.
Both approaches require search_from and search_to to be available beforehand, i.e. you need to calculate each user's search range before you do anything else.
E.g.:
WITH users_per_group AS (
SELECT
user_id, group_id
,DATE_SUB(parse_date("%Y%m%d", start_date), INTERVAL 4 DAY)search_from
,DATE_SUB(parse_date("%Y%m%d", start_date), INTERVAL 1 DAY)search_to
FROM TableName
)
Once you have this kind of table then you can use any of the mentioned approaches.
Since I don't have your data and don't know your table names, I am giving an example using a public dataset.
Approach 1
-- consider this your main table which contains user,grp,start_date,end_date
with maintable as (
select 'India' visit_from, '20161115' as start_date, '20161202' end_date
union all select 'Sweden' , '20161201', '20161202'
),
--then calculate search from-to date for every user and group
user_per_grp as(
select *, DATE_SUB(parse_date("%Y%m%d", start_date), INTERVAL 4 DAY)search_from --change interval as per your need
,DATE_SUB(parse_date("%Y%m%d", start_date), INTERVAL 1 DAY)search_to
from maintable
)
select visit_from,_TABLE_SUFFIX date,count(visitId) total_visits from
user_per_grp ug
left join `bigquery-public-data.google_analytics_sample.ga_sessions_*` as pub on pub.geoNetwork.country = ug.visit_from
where _TABLE_SUFFIX between format_date("%Y%m%d",ug.search_from) and format_date("%Y%m%d",ug.search_to)
group by 1,2
Approach 2
declare queries array<string> default [];
create temp table maintable as (
select 'India' visit_from, '20161115' as start_date, '20161202' end_date
union all select 'Sweden' , '20161201', '20161202'
);
create temp table user_per_grp as(
select *, DATE_SUB(parse_date("%Y%m%d", start_date), INTERVAL 4 DAY)search_from
,DATE_SUB(parse_date("%Y%m%d", start_date), INTERVAL 1 DAY)search_to
from maintable
);
-- for each user create a separate query here
FOR record IN (SELECT * from user_per_grp)
DO
set queries = queries || [format('select "%s" Visit_From,_TABLE_SUFFIX Date,count(visitId) total_visits from `bigquery-public-data.google_analytics_sample.ga_sessions_*` where _TABLE_SUFFIX between format_date("%%Y%%m%%d","%t") and format_date("%%Y%%m%%d","%t") and geoNetwork.country="%s" group by 1,2',record.visit_from,record.search_from,record.search_to,record.visit_from)];
--replace your query here.
END FOR;
--aggregating all the queries and executing it
execute immediate (select string_agg(query, ' union all ') from unnest(queries) query);
Here the 2nd approach processed much less data (~750 KB) than the 1st approach (~17 MB). But that might not hold for your dataset, as the date ranges of two users may overlap, which would lead to reading the same table twice.
If I have a table that has the following format:
purchase_time | user_id | items_purchased
The current query I'm doing is something like this:
SELECT user_id, date(purchase_time), sum(items_purchased)
from user_purchase_metrics
GROUP BY date(purchase_time), user_id;
I'm trying to create a query that will fill in 0 for purchases if there isn't an entry on a given date for a given user. Is this possible?
Side-stepping the valid concern by @xQbert about generating missing dates: performance must always give way to necessity, and without a convenient calendar table, generating the dates of interest is a necessity. Moreover, in this case the dates must be generated for each user_id. In the following this is done by pairing each generated date with the distinct user_id values from the user_purchase_metrics table. The result is then LEFT JOINed to the same table to sum the purchases, giving the desired 0 results for the missing dates (see demo; for the dates I just picked March):
with dates( user_id, idate ) as
( select user_id, d::date
from ( select distinct user_id
from user_purchase_metrics
) u
join generate_series( date '2021-03-01' --- start_date
, date '2021-03-31' --- end_date
, interval '1 day'
) gs(d)
on true
) -- select * from dates;
select d.user_id
, d.idate
, coalesce(sum(pm.items_purchased),0)
from dates d
left join user_purchase_metrics pm
on ( pm.user_id = d.user_id
and date(pm.purchase_time) = d.idate
)
group by d.user_id, d.idate
order by d.user_id, d.idate;
To parametrize it, the query can be embedded in a SQL function that returns a table (also in the demo).
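A minimal sketch of such a function; the name, parameter names and return types are assumptions, so adjust them to your schema:
-- hypothetical helper: purchases_per_day(from, to); types assumed, adjust to your schema
create or replace function purchases_per_day(p_from date, p_to date)
  returns table (user_id bigint, idate date, items bigint)
  language sql stable
as $$
  with dates(user_id, idate) as (
    select u.user_id, gs.d::date
    from (select distinct user_id from user_purchase_metrics) u
    cross join generate_series(p_from, p_to, interval '1 day') gs(d)
  )
  select d.user_id::bigint
       , d.idate
       , coalesce(sum(pm.items_purchased), 0)::bigint
  from dates d
  left join user_purchase_metrics pm
         on pm.user_id = d.user_id
        and date(pm.purchase_time) = d.idate
  group by d.user_id, d.idate
  order by d.user_id, d.idate;
$$;

-- usage:
-- select * from purchases_per_day(date '2021-03-01', date '2021-03-31');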
I have a dataset with a list of the users that are connected to the server, sampled every 15 minutes, e.g.
May 7, 2020, 8:09 AM user1
May 7, 2020, 8:09 AM user2
...
May 7, 2020, 8:24 AM user1
May 7, 2020, 8:24 AM user3
...
And I'd like to get a number of active users for every day, e.g.
May 7, 2020 71
May 8, 2020 83
Now, the tricky part. An active user is defined as one who has been connected 80% of the time or more across the last 7 days. This means that, if there are 672 15-minute intervals in a week (1440 / 15 x 7), then a user has to appear at least 538 times (672 x 0.8, rounded up).
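As a quick sanity check, the threshold works out like this:
SELECT ceil((1440 / 15) * 7 * 0.8) AS weekly_threshold;   -- 96 slots/day * 7 days * 0.8 = 537.6 -> 538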
My code so far is:
SELECT
DATE_TRUNC('week', ts) AS ts_week
,COUNT(DISTINCT user)
FROM activeusers
GROUP BY 1
Which only gives a list of unique users connected at every week.
July 13, 2020, 12:00 AM 435
July 20, 2020, 12:00 AM 267
But I'd like to implement the active user definition, as well as get the result for every day, not just Mondays.
The resulting special difficulty here is that users might qualify for days where they have no connections at all, if they were connected sufficiently during the previous 6 days.
That makes it harder to use a window function. Aggregating in a LATERAL subquery is the obvious alternative:
WITH daily AS ( -- ① granulate daily
SELECT ts::date AS the_day
, "user"
, count(*)::int AS daily_cons
FROM activeusers
GROUP BY 1, 2
)
SELECT d.the_day, count("user") AS active_users
FROM ( -- ② time frame
SELECT generate_series (timestamp '2020-07-01'
, LOCALTIMESTAMP
, interval '1 day')::date
) d(the_day)
LEFT JOIN LATERAL (
SELECT "user"
FROM daily a
WHERE a.the_day >= d.the_day - 6
AND a.the_day <= d.the_day
GROUP BY "user"
HAVING sum(daily_cons) >= 538 -- ③
) sum7 ON true
ORDER BY d.the_day;
① The CTE daily is optional, but starting with daily aggregates should help performance a lot.
② You'll have to define the time frame somehow. I chose the current year. Replace with your choice. To work with the total range present in your table, use instead:
SELECT generate_series (min(the_day)::timestamp
, max(the_day)::timestamp
, interval '1 day')::date AS the_day
FROM daily
Consider basics here:
Generating time series between two dates in PostgreSQL
This also overcomes the "special difficulty" mentioned above.
③ The condition in the HAVING clause eliminates all rows with insufficient connections over the last 7 days (including "today").
Related:
Cumulative sum of values by month, filling in for missing months
Best way to count records by arbitrary time intervals in Rails+Postgres
Total Number of Records per Week
Aside:
You shouldn't really use the reserved word "user" as an identifier.
I have done something similar to this for device monitoring reports. I was never able to come up with a solution that does not involve building a calendar and cross joining it to a distinct list of devices (user values in your case).
This deliberately verbose query builds the cross join, gets active counts per user and ddate, performs the running sum() over seven days, and then counts the number of users on a given ddate that had 538 or more actives in the seven days ending on that ddate.
with drange as (
select min(ts) as start_ts, max(ts) as end_ts
from activeusers
), alldates as (
select (start_ts + make_interval(days := x))::date as ddate
from drange
cross join generate_series(0, date_part('day', end_ts - start_ts)::int) as gs(x)
), user_dates as (
select ddate, "user"
from alldates
cross join (select distinct "user" from activeusers) u
), user_date_counts as (
select u.ddate, u."user",
sum(case when a.user is null then 0 else 1 end) as actives
from user_dates u
left join activeusers a
on a."user" = u."user"
and a.ts::date = u.ddate
group by u.ddate, u."user"
), running_window as (
select ddate, "user",
sum(actives) over (partition by "user"
order by ddate
rows between 6 preceding
and current row) seven_days
from user_date_counts
), flag_active as (
select ddate, "user",
seven_days >= 538 as is_active
from running_window
)
select ddate, count(*) as active_users
from flag_active
where is_active
group by ddate
;
Because you want the active users for every day but the definition is based on a trailing week, I think you might use a CROSS APPLY to duplicate the count for every day. The FROM part of the query gives you the days and the users, and the CROSS APPLY limits them to active users. You can specify in the final WHERE which users or dates you want.
SELECT users.UserName, users.LogDate
FROM (
SELECT UserName, CAST(ts AS DATE) AS LogDate
FROM activeusers
GROUP BY UserName, CAST(ts AS DATE)
) AS users
CROSS APPLY (
SELECT UserName, COUNT(1) AS Actives
FROM activeusers AS a
WHERE a.UserName = users.UserName AND CAST(ts AS DATE) BETWEEN DATEADD(WEEK, -1, LogDate) AND LogDate
GROUP BY UserName
HAVING COUNT(1) >= 538
) AS activeUsers
WHERE users.LogDate > '2020-01-01' AND users.UserName = 'user1'
This is SQL Server, so you may need to make revisions for PostgreSQL. CROSS APPLY translates to JOIN LATERAL (...) ON true (OUTER APPLY would be LEFT JOIN LATERAL).
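For what it's worth, a rough PostgreSQL equivalent, keeping the column names assumed by the T-SQL snippet above (UserName, ts) rather than the original table's "user" column:
-- sketch only: column names username/ts are assumptions carried over from the T-SQL above
SELECT users.username, users.logdate
FROM (
    SELECT username, ts::date AS logdate
    FROM   activeusers
    GROUP  BY username, ts::date
) users
JOIN LATERAL (
    SELECT a.username, count(*) AS cons
    FROM   activeusers a
    WHERE  a.username = users.username
    AND    a.ts::date BETWEEN users.logdate - 7 AND users.logdate
    GROUP  BY a.username
    HAVING count(*) >= 538
) active_users ON true
WHERE users.logdate > date '2020-01-01'
AND   users.username = 'user1';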
Having trouble putting together a query to pull the aggregate values of a given timestamp and the timestamp before it. Given the following schema:
name TEXT,
ts TIMESTAMP,
X NUMERIC,
Y NUMERIC
where there are gaps in the ts column due to gaps in data, I'm trying to construct a query to produce
name,
date_trunc('day', q1.ts),
avg(q1.X),
sum(q1.Y),
date_trunc('day', q2.ts),
avg(q2.X),
sum(q2.Y)
The first half is straightforward:
SELECT q1.name, date_trunc('day', q1.ts), avg(q1.X), sum(q1.Y)
FROM data as q1
GROUP BY 1, 2
ORDER BY 1, 2;
But I'm not sure how to generate the relation to find the "day" before for each row. I'm trying to make an inner join work, like this:
SELECT q1.name, q1.day, q1.avg, q1.sum, q2.day, q2.avg, q2.sum
FROM (
SELECT name, date_trunc('day', ts) AS day, avg(X) AS avg, sum(Y) as sum
FROM data
GROUP BY 1,2
ORDER BY 1,2
) q1 INNER JOIN (
SELECT name, date_trunc('day', ts) AS day, avg(X) AS avg, sum(Y) as sum
FROM data
GROUP BY 1,2
ORDER BY 1,2
) q2 ON (
q1.name = q2.name
AND q2.day = q1.day - interval '1 day'
);
The problem with this is that it doesn't cover the cases where the previous "day" is more than 1 day before the current one.
The special difficulty here is that you need to number days after aggregating rows. You can do this in a single query level with the window function row_number(), since window functions are applied after aggregation by GROUP BY.
Also, use a CTE to avoid executing the same subquery multiple times:
WITH q AS (
SELECT name, ts::date AS day
,avg(x) AS avg_x, sum(y) AS sum_y
,row_number() OVER (PARTITION BY name ORDER BY ts::date) AS rn
FROM data
GROUP BY 1,2
)
SELECT q1.name, q1.day, q1.avg_x, q1.sum_y
,q2.day AS day2, q2.avg_x AS avg_x2, q2.sum_y AS sum_y2
FROM q q1
LEFT JOIN q q2 ON q1.name = q2.name
AND q1.rn = q2.rn + 1
ORDER BY 1,2;
Using the simpler cast to date (ts::date) instead of date_trunc('day', ts) to get "days".
LEFT [OUTER] JOIN (as opposed to [INNER] JOIN) is instrumental to preserve the corner case of the first row, where there is no previous day.
And ORDER BY should be applied to the outer query.
The question isn't crystal clear, but it sounds like you're actually trying to fill gaps while keeping track of leading/lagging rows.
To fill the gaps, look into generate_series() and left join it with your table:
select d
from generate_series(timestamp '2013-12-01', timestamp '2013-12-31', interval '1 day') d;
http://www.postgresql.org/docs/current/static/functions-srf.html
For previous and next row values, look into lead() and lag() window functions:
select date_trunc('day', ts) as curr_row_day,
lag(date_trunc('day', ts)) over w as prev_row_day
from data
window w as (order by ts)
http://www.postgresql.org/docs/current/static/tutorial-window.html
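If the goal is both to fill the gaps and to carry the previous day's values alongside, the two hints can be glued together. A sketch assuming the data(name, ts, x, y) schema from the question and a hard-coded date range:
WITH daily AS (
   SELECT name, ts::date AS day, avg(x) AS avg_x, sum(y) AS sum_y
   FROM   data
   GROUP  BY 1, 2
   )
SELECT n.name
     , g.d::date AS day
     , q.avg_x
     , q.sum_y
     , lag(q.avg_x) OVER (PARTITION BY n.name ORDER BY g.d) AS prev_avg_x
     , lag(q.sum_y) OVER (PARTITION BY n.name ORDER BY g.d) AS prev_sum_y
FROM  (SELECT DISTINCT name FROM data) n
CROSS  JOIN generate_series(timestamp '2013-12-01'
                          , timestamp '2013-12-31'
                          , interval '1 day') g(d)
LEFT   JOIN daily q ON q.name = n.name
                   AND q.day  = g.d::date
ORDER  BY n.name, g.d;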
I want to count IDs per month using generate_series(). This query works in PostgreSQL 9.1:
SELECT (to_char(serie,'yyyy-mm')) AS year, sum(amount)::int AS eintraege FROM (
SELECT
COUNT(mytable.id) as amount,
generate_series::date as serie
FROM mytable
RIGHT JOIN generate_series(
(SELECT min(date_from) FROM mytable)::date,
(SELECT max(date_from) FROM mytable)::date,
interval '1 day') ON generate_series = date(date_from)
WHERE version = 1
GROUP BY generate_series
) AS foo
GROUP BY Year
ORDER BY Year ASC;
This is my output:
"2006-12" | 4
"2007-02" | 1
"2007-03" | 1
But what I want to get is this output ('0' value in January):
"2006-12" | 4
"2007-01" | 0
"2007-02" | 1
"2007-03" | 1
Months without id should be listed nevertheless.
Any ideas how to solve this?
Sample data:
drop table if exists mytable;
create table mytable(id bigint, version smallint, date_from timestamp);
insert into mytable(id, version, date_from) values
(4084036, 1, '2006-12-22 22:46:35'),
(4084938, 1, '2006-12-23 16:19:13'),
(4084938, 2, '2006-12-23 16:20:23'),
(4084939, 1, '2006-12-23 16:29:14'),
(4084954, 1, '2006-12-23 16:28:28'),
(4250653, 1, '2007-02-12 21:58:53'),
(4250657, 1, '2007-03-12 21:58:53')
;
Untangled, simplified and fixed, it might look like this:
SELECT to_char(s.tag,'yyyy-mm') AS monat
, count(t.id) AS eintraege
FROM (
SELECT generate_series(min(date_from)::date
, max(date_from)::date
, interval '1 day'
)::date AS tag
FROM mytable t
) s
LEFT JOIN mytable t ON t.date_from::date = s.tag AND t.version = 1
GROUP BY 1
ORDER BY 1;
Among all the noise, misleading identifiers and unconventional formatting, the actual problem was hidden here:
WHERE version = 1
You made correct use of RIGHT [OUTER] JOIN. But adding a WHERE clause that requires an existing row from mytable converts the RIGHT [OUTER] JOIN to an [INNER] JOIN effectively.
Move that filter into the JOIN condition to make it work.
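A minimal before/after illustration against the sample data above (month range hard-coded to cover the sample rows). Only the placement of version = 1 differs; the first query loses 2007-01, the second keeps it with a count of 0:
-- filter in the WHERE clause: unmatched months are dropped
SELECT to_char(m.mon, 'yyyy-mm') AS monat, count(t.id) AS eintraege
FROM   generate_series(timestamp '2006-12-01', timestamp '2007-03-01', interval '1 mon') m(mon)
LEFT   JOIN mytable t ON date_trunc('month', t.date_from) = m.mon
WHERE  t.version = 1
GROUP  BY 1
ORDER  BY 1;

-- filter in the join condition: unmatched months survive with a count of 0
SELECT to_char(m.mon, 'yyyy-mm') AS monat, count(t.id) AS eintraege
FROM   generate_series(timestamp '2006-12-01', timestamp '2007-03-01', interval '1 mon') m(mon)
LEFT   JOIN mytable t ON date_trunc('month', t.date_from) = m.mon
                     AND t.version = 1
GROUP  BY 1
ORDER  BY 1;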
I simplified some other things while I was at it.
Better yet:
SELECT to_char(mon, 'yyyy-mm') AS monat
, COALESCE(t.ct, 0) AS eintraege
FROM (
SELECT date_trunc('month', date_from)::date AS mon
, count(*) AS ct
FROM mytable
WHERE version = 1
GROUP BY 1
) t
RIGHT JOIN (
SELECT generate_series(date_trunc('month', min(date_from))
, max(date_from)
, interval '1 mon')::date
FROM mytable
) m(mon) USING (mon)
ORDER BY mon;
It's much cheaper to aggregate first and join later - joining one row per month instead of one row per day.
It's cheaper to base GROUP BY and ORDER BY on the date value instead of the rendered text.
count(*) is a bit faster than count(id), while equivalent in this query.
generate_series() is a bit faster and safer when based on timestamp instead of date. See:
Generating time series between two dates in PostgreSQL