How to add records for each user based on another existing row in BigQuery? - sql

Posting here in case someone with more knowledge than me may be able to help me with some direction.
I have a table like this:
| Row | date     | user id | score |
|-----|----------|---------|-------|
| 1   | 20201120 | 1       | 26    |
| 2   | 20201121 | 1       | 14    |
| 3   | 20201125 | 1       | 0     |
| 4   | 20201114 | 2       | 32    |
| 5   | 20201116 | 2       | 0     |
| 6   | 20201120 | 2       | 23    |
However, from this I need a record for each user for each day; if a day is missing for a user, the last recorded score should be carried forward, so I would end up with something like this:
| Row | date     | user id | score |
|-----|----------|---------|-------|
| 1   | 20201120 | 1       | 26    |
| 2   | 20201121 | 1       | 14    |
| 3   | 20201122 | 1       | 14    |
| 4   | 20201123 | 1       | 14    |
| 5   | 20201124 | 1       | 14    |
| 6   | 20201125 | 1       | 0     |
| 7   | 20201114 | 2       | 32    |
| 8   | 20201115 | 2       | 32    |
| 9   | 20201116 | 2       | 0     |
| 10  | 20201117 | 2       | 0     |
| 11  | 20201118 | 2       | 0     |
| 12  | 20201119 | 2       | 0     |
| 13  | 20201120 | 2       | 23    |
I'm trying to do this in BigQuery using Standard SQL. I have an idea of how to keep the same score across the following empty dates, but I really don't know how to add new rows for the missing dates for each user. Also, keep in mind that this example only has 2 users, but in my data I have more than 1,500.
My end goal is to show something like the average score per day. For background: because of our logic, if a score wasn't recorded on a specific day, the user is still at the last recorded score, which is why I need a score for every user on every day.
I'd really appreciate any help I could get! I've been trying different options without success.

Below is for BigQuery Standard SQL
#standardSQL
select date, user_id,
  last_value(score ignore nulls) over(partition by user_id order by date) as score
from (
  select user_id, format_date('%Y%m%d', day) as date
  from (
    select user_id,
      min(parse_date('%Y%m%d', date)) as min_date,
      max(parse_date('%Y%m%d', date)) as max_date
    from `project.dataset.table`
    group by user_id
  ) a, unnest(generate_date_array(min_date, max_date)) day
)
left join `project.dataset.table` b
using(date, user_id)
-- order by user_id, date
If applied to the sample data from your question, the output matches the expected result above.
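Since the stated end goal is the average score per day, the gap-filled result can then simply be aggregated. A minimal sketch, assuming the query above is pasted in as a subquery or saved as a view named `filled` (a hypothetical name):
#standardSQL
-- `filled` is assumed to be the gap-filled output of the query above
-- (one row per user per date, with the carried-forward score)
select date, avg(score) as avg_score
from filled
group by date
order by date;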

One option uses generate_date_array() to create the series of dates for each user, then brings in the table with a left join.
select d.date, d.user_id,
  last_value(t.score ignore nulls) over(partition by d.user_id order by d.date) as score
from (
  select u.user_id, day as date
  from (
    select user_id, min(date) as min_date, max(date) as max_date
    from mytable
    group by user_id
  ) u
  cross join unnest(generate_date_array(u.min_date, u.max_date, interval 1 day)) as day
) d
left join mytable t
  on t.user_id = d.user_id and t.date = d.date

I think the most efficient method is to use generate_date_array() but in a very particular way:
with tt as (
  select t.*,
    date_add(lead(date) over (partition by user_id order by date), interval -1 day) as next_date
  from t
)
select row_number() over (order by tt.user_id, dte) as id,
  tt.user_id, dte, tt.score
from tt cross join
  unnest(generate_date_array(date,
                             coalesce(next_date, date),
                             interval 1 day
                            )
        ) dte;

Related

Difference in dates when actions are taken multiple times

I have the following table:
Table (History h)
| Source ID | Action | Created Date |
| 1 | Filing Rejected | 1/3/2023 |
| 2 | Filing Rejected | 1/4/2023 |
| 1 | Filing Resubmitted | 1/5/2023 |
| 3 | Filing Rejected | 1/5/2023 |
| 2 | Filing Resubmitted | 1/6/2023 |
| 1 | Filing Rejected | 1/7/2023 |
| 3 | Filing Resubmitted | 1/8/2023 |
| 1 | Filing Resubmitted | 1/9/2023 |
The results that I want are:
|Source ID | Rejected Date | Resubmitted Date | Difference |
| 1 | 1/3/2023 | 1/5/2023 | 2 |
| 1 | 1/7/2023 | 1/9/2023 | 2 |
| 2 | 1/4/2023 | 1/6/2023 | 2 |
| 3 | 1/5/2023 | 1/8/2023 | 3 |
My current query is:
SELECT h1.Source_ID, min(CONVERT(varchar,h1.CREATED_DATE,101)) AS 'Rejected Date',
min(CONVERT(varchar,h2.Created_Date,101)) AS 'Resubmitted Date',
DATEDIFF(HOUR, h1.Created_Date, min(h2.Created_Date)) / 24 Difference
FROM History h1 INNER JOIN History h2
ON h2.Source_ID = h1.Source_ID AND h2.Created_Date > h1.Created_Date
WHERE (h1.Created_Date >= '2023-01-01 00:00:00.000' AND h1.Created_Date <= '2023-01-31 23:59:59.000')
AND ((h1.CHANGE_VALUE_TO = 'Filing Rejected' AND h2.CHANGE_VALUE_TO = 'Filing Resubmitted'))
GROUP BY h1.Source_ID, h1.Created_Date,h2.Created_Date
ORDER BY 'Rejected Date' ASC;
The results I get are:
|Source ID | Rejected Date | Resubmitted Date | Difference |
| 1 | 1/3/2023 | 1/5/2023 | 2 |
| 1 * | 1/3/2023 | 1/9/2023 | 6 |
| 1 | 1/7/2023 | 1/9/2023 | 2 |
| 2 | 1/4/2023 | 1/6/2023 | 2 |
| 3 | 1/5/2023 | 1/8/2023 | 3 |
So there is one row that is showing up that should not be. I have marked it with an asterisk.
I just want the difference from the first rejection to the first resubmission, the second rejection to the second resubmission, and so on.
Any help, another idea on how to do it, anything really, is greatly appreciated.
If events always properly interleave, one approach uses row_number() and conditional aggregation:
select source_id,
max(case when action = 'Filing Rejected' then created_date end) rejected_date,
max(case when action = 'Filing Resubmitted' then created_date end) resubmitted_date
from (
select h.*,
row_number() over(partition by source_id, action order by created_date) rn
from history h
where created_date >= '2023-01-01' and created_date < '2023-02-01'
) h
group by source_id, rn
order by source_id, rn
This will not work if your data has consecutive rejections or resubmissions.
As for the date difference, we can add another layer so we don’t need to type the conditional expressions twice:
select h.*, datediff(day, rejected_date, resubmitted_date) diff_in_days
from (
select source_id,
max(case when action = 'Filing Rejected' then created_date end) rejected_date,
max(case when action = 'Filing Resubmitted' then created_date end) resubmitted_date
from (
select h.*,
row_number() over(partition by source_id, action order by created_date) rn
from history h
where created_date >= '2023-01-01' and created_date < '2023-02-01'
) h
group by source_id, rn
) h
order by source_id, rejected_date
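If the data can contain consecutive rejections or resubmissions (the caveat above), the pairing logic has to be made explicit. Below is a hedged fallback sketch in SQL Server syntax, using the column names from the sample table (Source_ID, Action, Created_Date), that pairs every rejection with the earliest later resubmission for the same source; note that back-to-back rejections would then map to the same resubmission, which may or may not be the interpretation you want:
SELECT r.Source_ID,
       r.Created_Date AS Rejected_Date,
       x.Resubmitted_Date,
       DATEDIFF(DAY, r.Created_Date, x.Resubmitted_Date) AS Difference
FROM History r
OUTER APPLY (
    -- earliest resubmission that happened after this rejection, if any
    SELECT MIN(h2.Created_Date) AS Resubmitted_Date
    FROM History h2
    WHERE h2.Source_ID = r.Source_ID
      AND h2.Action = 'Filing Resubmitted'
      AND h2.Created_Date > r.Created_Date
) x
WHERE r.Action = 'Filing Rejected'
ORDER BY r.Source_ID, r.Created_Date;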

30 day rolling count of distinct IDs

After looking at what seems to be a commonly asked question and not being able to get any of the solutions to work for me, I decided I should ask for myself.
I have a data set with two columns: session_start_time, uid
I am trying to generate a rolling 30 day tally of unique sessions
It is simple enough to query for the number of unique uids per day:
SELECT
COUNT(DISTINCT(uid))
FROM segment_clean.users_sessions
WHERE session_start_time >= CURRENT_DATE - interval '30 days'
It is also relatively simple to calculate the daily unique uids over a date range.
SELECT
DATE_TRUNC('day',session_start_time) AS "date"
,COUNT(DISTINCT uid) AS "count"
FROM segment_clean.users_sessions
WHERE session_start_time >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY date(session_start_time)
I then tried several ways to do a rolling 30 day unique count over a time interval:
SELECT
DATE(session_start_time) AS "running30day"
,COUNT(distinct(
case when date(session_start_time) >= running30day - interval '30 days'
AND date(session_start_time) <= running30day
then uid
end)
) AS "unique_30day"
FROM segment_clean.users_sessions
WHERE session_start_time >= CURRENT_DATE - interval '3 months'
GROUP BY date(session_start_time)
Order BY running30day desc
I really thought this would work, but when looking into the results, it appears I'm getting the same results as when doing the daily uniques rather than the uniques over 30 days.
I am writing this query in Metabase using the SQL query editor; the underlying tables are in Redshift.
If you read this far, thank you, your time has value and I appreciate the fact that you have spent some of it to read my question.
EDIT:
As rightfully requested, I added an example of the data set I'm working with and the desired outcome.
+-----+-------------------------------+
| UID | SESSION_START_TIME            |
+-----+-------------------------------+
| 10  | 2020-01-13T01:46:07.000-05:00 |
| 5   | 2020-01-13T01:46:07.000-05:00 |
| 3   | 2020-01-18T02:49:23.000-05:00 |
| 9   | 2020-03-06T18:18:28.000-05:00 |
| 2   | 2020-03-06T18:18:28.000-05:00 |
| 8   | 2020-03-31T23:13:33.000-04:00 |
| 3   | 2020-08-28T18:23:15.000-04:00 |
| 2   | 2020-08-28T18:23:15.000-04:00 |
| 9   | 2020-08-28T18:23:15.000-04:00 |
| 3   | 2020-08-28T18:23:15.000-04:00 |
| 8   | 2020-09-15T16:40:29.000-04:00 |
| 3   | 2020-09-21T20:49:09.000-04:00 |
| 1   | 2020-11-05T21:31:48.000-05:00 |
| 6   | 2020-11-05T21:31:48.000-05:00 |
| 8   | 2020-12-12T04:42:00.000-05:00 |
| 8   | 2020-12-12T04:42:00.000-05:00 |
| 5   | 2020-12-12T04:42:00.000-05:00 |
+-----+-------------------------------+
Below is what I would like the result to look like:
+------------+---------------------+
| DATE       | UNIQUE 30 DAY COUNT |
+------------+---------------------+
| 2020-01-13 | 3                   |
| 2020-01-18 | 1                   |
| 2020-03-06 | 3                   |
| 2020-03-31 | 1                   |
| 2020-08-28 | 4                   |
| 2020-09-15 | 2                   |
| 2020-09-21 | 1                   |
| 2020-11-05 | 2                   |
| 2020-12-12 | 2                   |
+------------+---------------------+
Thank you
You can approach this by keeping a counter of when users are counted and then uncounted -- 30 (or perhaps 31) days later. Then, determine the "islands" of being counted, and aggregate. This involves:
Unpivoting the data to have an "enters count" and "leaves" count for each session.
Accumulate the count so on each day for each user you know whether they are counted or not.
This defines "islands" of counting. Determine where the islands start and stop -- getting rid of all the detritus in-between.
Now you can simply do a cumulative sum on each date to determine the 30 day session.
In SQL, this looks like:
with t as (
select uid, date_trunc('day', session_start_time) as s_day, 1 as inc
from users_sessions
union all
select uid, date_trunc('day', session_start_time) + interval '31 day' as s_day, -1
from users_sessions
),
tt as ( -- increment the ins and outs to determine whether a uid is in or out on a given day
select uid, s_day, sum(inc) as day_inc,
sum(sum(inc)) over (partition by uid order by s_day rows between unbounded preceding and current row) as running_inc
from t
group by uid, s_day
),
ttt as ( -- find the beginning and end of the islands
select tt.uid, tt.s_day,
(case when running_inc > 0 then 1 else -1 end) as in_island
from (select tt.*,
lag(running_inc) over (partition by uid order by s_day) as prev_running_inc,
lead(running_inc) over (partition by uid order by s_day) as next_running_inc
from tt
) tt
where running_inc > 0 and (prev_running_inc = 0 or prev_running_inc is null) or
running_inc = 0 and (next_running_inc > 0 or next_running_inc is null)
)
select s_day,
sum(sum(in_island)) over (order by s_day rows between unbounded preceding and current row) as active_30
from ttt
group by s_day;
I'm pretty sure an easier way to do this is to use a join. This creates a list of all the distinct users who had a session on each day and a list of all distinct dates in the data. Then it one-to-many joins the user list to the date list and counts the distinct users; the key here is the expanded join criterion that matches a range of dates to a single date via a pair of inequalities.
with users as (
    select distinct
           uid,
           date_trunc('day', session_start_time) as dt
    from <table>
    where session_start_time >= '2021-05-01'
),
dates as (
    select distinct
           date_trunc('day', session_start_time) as dt
    from <table>
    where session_start_time >= '2021-05-01'
)
select count(distinct uid),
       dates.dt
from users
join dates
  on users.dt >= dates.dt - 29
 and users.dt <= dates.dt
group by dates.dt
order by dt desc;

Combine PARTITION BY and GROUP BY

I have a (mssql) table like this:
+----+----------+---------+--------+--------+
| id | username | date | scoreA | scoreB |
+----+----------+---------+--------+--------+
| 1 | jim | 01/2020 | 100 | 0 |
| 2 | max | 01/2020 | 0 | 200 |
| 3 | jim | 01/2020 | 0 | 150 |
| 4 | max | 02/2020 | 150 | 0 |
| 5 | jim | 02/2020 | 0 | 300 |
| 6 | lee | 02/2020 | 100 | 0 |
| 7 | max | 02/2020 | 0 | 200 |
+----+----------+---------+--------+--------+
What I need is to get the best "combined" score per date. (By "combined" score I mean the best scoreA and the best scoreB per user and date, summed.)
The result should look like this:
+----------+---------+--------------------------------------------+
| username | date | combined_score (max(scoreA) + max(scoreB)) |
+----------+---------+--------------------------------------------+
| jim | 01/2020 | 250 |
| max | 02/2020 | 350 |
+----------+---------+--------------------------------------------+
I came this far:
I can group the scores by user like this:
SELECT
username, (max(scoreA) + max(scoreB)) AS combined_score
FROM score_table
GROUP BY username
ORDER BY combined_score DESC
And I can get the best score per date with PARTITION BY like this:
SELECT *
FROM
(SELECT t.*, row_number() OVER (PARTITION BY date ORDER BY scoreA DESC) rn
FROM score_table t) as tmp
WHERE tmp.rn = 1
ORDER BY date
Is there a proper way to combine these statements and get the result I need? Thank you!
Btw. Don't care about possible ties!
You can combine window functions and aggregation functions like this:
SELECT s.*
FROM (SELECT username, date, (max(scoreA) + max(scoreB)) AS combined_score,
ROW_NUMBER() OVER (PARTITION BY date ORDER BY max(scoreA) + max(scoreB) DESC) as seqnum
FROM score_table
GROUP BY username, date
) s
WHERE seqnum = 1
ORDER BY combined_score DESC;
Note that date needs to be part of the aggregation.
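As a side note, since this is SQL Server, the same result can be written in a single statement with TOP (1) WITH TIES ordered by a per-date ROW_NUMBER(). A minimal sketch against the same score_table, offered as an alternative rather than part of the answer above:
SELECT TOP (1) WITH TIES
       username,
       date,
       MAX(scoreA) + MAX(scoreB) AS combined_score
FROM score_table
GROUP BY username, date
-- keeps the rows whose per-date row number is 1, i.e. the best combined score per date
ORDER BY ROW_NUMBER() OVER (PARTITION BY date ORDER BY MAX(scoreA) + MAX(scoreB) DESC);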

Select latest values for group of related records

I have a table that holds data that is logically groupable by multiple properties (a foreign key, for example). The data is sequential over a continuous time interval; i.e., it is time series data. What I am trying to achieve is to select only the latest values for each group of groups.
Here is example data:
+-----------------------------------------+
| code | value | date | relation_id |
+-----------------------------------------+
| A | 1 | 01.01.2016 | 1 |
| A | 2 | 02.01.2016 | 1 |
| A | 3 | 03.01.2016 | 1 |
| A | 4 | 01.01.2016 | 2 |
| A | 5 | 02.01.2016 | 2 |
| A | 6 | 03.01.2016 | 2 |
| B | 1 | 01.01.2016 | 1 |
| B | 2 | 02.01.2016 | 1 |
| B | 3 | 03.01.2016 | 1 |
| B | 4 | 01.01.2016 | 2 |
| B | 5 | 02.01.2016 | 2 |
| B | 6 | 03.01.2016 | 2 |
+-----------------------------------------+
And here is example of desired output:
+-----------------------------------------+
| code | value | date | relation_id |
+-----------------------------------------+
| A | 3 | 03.01.2016 | 1 |
| A | 6 | 03.01.2016 | 2 |
| B | 3 | 03.01.2016 | 1 |
| B | 6 | 03.01.2016 | 2 |
+-----------------------------------------+
To put this in perspective: for every related object, I want to select each code with the latest date.
Here is the select I came up with. I've used the ROW_NUMBER() OVER (PARTITION BY ...) approach:
SELECT indicators.code, indicators.dimension, indicators.unit, x.value, x.date, x.ticker, x.name
FROM (
SELECT
ROW_NUMBER() OVER (PARTITION BY indicator_id ORDER BY date DESC) AS r,
t.indicator_id, t.value, t.date, t.company_id, companies.sic_id,
companies.ticker, companies.name
FROM fundamentals t
INNER JOIN companies on companies.id = t.company_id
WHERE companies.sic_id = 89
) x
INNER JOIN indicators on indicators.id = x.indicator_id
WHERE x.r <= (SELECT count(*) FROM companies where sic_id = 89)
It works, but the problem is that it is painfully slow: when working with about 5% of production data, which equals roughly 3 million fundamentals records, this select takes about 10 seconds to finish. My guess is that this happens because the subselect selects a huge number of records first.
Is there any way to speed this query up, or am I digging in the wrong direction by trying to do it the way I do?
Postgres offers the convenient distinct on for this purpose:
select distinct on (relation_id, code) t.*
from t
order by relation_id, code, date desc;
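As a hedged side note on the performance concern: a composite index that matches the DISTINCT ON / ORDER BY columns often lets Postgres skip a full sort. The sketch below uses the simplified table and column names from the answer above, so it would need to be adapted to the real fundamentals/companies schema:
-- hypothetical index; whether the planner actually uses it depends on the data
create index if not exists t_relation_code_date_idx
    on t (relation_id, code, date desc);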
Your query uses different column names than your sample data, so it's hard to tell, but it looks like you just want to group by everything except for date? Assuming you don't have multiple rows sharing the most recent date, something like this should work. Basically, don't use the window function; use a proper group by, and your engine should optimize the query better.
SELECT mytable.code,
mytable.value,
mytable.date,
mytable.relation_id
FROM mytable
JOIN (
SELECT code,
max(date) as date,
relation_id
FROM mytable
GROUP BY code, relation_id
) Q1
ON Q1.code = mytable.code
AND Q1.date = mytable.date
AND Q1.relation_id = mytable.relation_id
Other option:
SELECT DISTINCT Code,
Relation_ID,
FIRST_VALUE(Value) OVER (PARTITION BY Code, Relation_ID ORDER BY Date DESC) Value,
FIRST_VALUE(Date) OVER (PARTITION BY Code, Relation_ID ORDER BY Date DESC) Date
FROM mytable
This will return the top value for whatever you partition by and whatever you order by.
I believe we can try something like this
SELECT CODE,Relation_ID,Date,MAX(value)value FROM mytable
GROUP BY CODE,Relation_ID,Date

Full outer join on a table itself and run some window functions

Background
I have an ETL job processing real-time log files hourly. Whenever the system generates a new event, it takes a snapshot of the summary of all historical events (if any exist) and records it together with the current event. Then the data is loaded into Redshift.
Example
The table looks like something below:
+------------+--------------+---------+-----------+-------+-------+
| current_id | current_time | past_id | past_time | freq1 | freq2 |
+------------+--------------+---------+-----------+-------+-------+
| 2 | time2 | 1 | time1 | 13 | 5 |
| 3 | time3 | 1 | time1 | 13 | 5 |
| 3 | time3 | 2 | time2 | 2 | 1 |
| 4 | time4 | 1 | time1 | 13 | 5 |
| 4 | time4 | 2 | time2 | 2 | 1 |
| 4 | time4 | 3 | time3 | 1 | 1 |
+------------+--------------+---------+-----------+-------+-------+
This is what happened for the above table:
time1: event 1 happened. System took a snapshot, but nothing is recorded.
time2: event 2 happened. System took a snapshot and record event 1.
time3: event 3 happened. System took a snapshot and record event 1 & 2.
time4: event 4 happened. System took a snapshot and record event 1, 2 & 3.
Desired Outcome
I will need to transform the data into the following format in order to do some analysis:
+----+------------+-------+-------+
| id | event_time | freq1 | freq2 |
+----+------------+-------+-------+
| 1 | time1 | 0 | 0 |
| 2 | time2 | 13 | 5 | -- 13 | 5
| 3 | time3 | 15 | 6 | -- 13 + 2 | 5 + 1
| 4 | time4 | 16 | 7 | -- 15 + 1 | 6 + 1
+----+------------+-------+-------+
Basically, the new freq1 and freq2 are the cumulative sums of the lagged freq1 and freq2.
My Idea
I am thinking of doing a self full outer join on current_id and past_id to achieve the following result first:
+----+------------+-------+-------+
| id | event_time | freq1 | freq2 |
+----+------------+-------+-------+
| 1 | time1 | 13 | 5 |
| 2 | time2 | 2 | 1 |
| 3 | time3 | 1 | 1 |
| 4 | time4 | null | null |
+----+------------+-------+-------+
Then I can apply a lag() over() window function followed by a sum() over().
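For what it's worth, here is a hedged sketch of that idea in Redshift-flavoured SQL, assuming the snapshot table is named t and that freq1/freq2 for a given past_id are identical across snapshots (as in the example). This is only an illustration of the approach described above, not a vetted implementation:
with per_event as (
    -- one row per event: its own timestamp plus the freqs recorded about it by later snapshots
    select coalesce(c.current_id, p.past_id)     as id,
           coalesce(c.current_time, p.past_time) as event_time,
           p.freq1,
           p.freq2
    from (select distinct current_id, current_time from t) c
    full outer join (select distinct past_id, past_time, freq1, freq2 from t) p
      on p.past_id = c.current_id
)
select id, event_time,
       coalesce(sum(prev_freq1) over (order by event_time rows unbounded preceding), 0) as freq1,
       coalesce(sum(prev_freq2) over (order by event_time rows unbounded preceding), 0) as freq2
from (
    -- shift the freqs down by one event, then take a running total
    select id, event_time,
           lag(freq1) over (order by event_time) as prev_freq1,
           lag(freq2) over (order by event_time) as prev_freq2
    from per_event
) x
order by event_time;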
Question
Is this the correct approach? Is there a more efficient way to do this? This is just a small sample of the actual data, so performance could be a concern.
My query is always returning a lot of duplicated values, so I am not sure what went wrong.
Solution
The answer from #GordonLinoff is correct for the above use case. I am adding some minor updates in order to get it working on my actual table. The only difference is that my event_id values are 36-character Java UUIDs and the event_time values are timestamps.
select distinct past_id, past_time, 0 as freq1, 0 as freq2
from (
select past_id, past_time,
row_number() over (partition by current_id order by current_time desc) as seqnum
from t
) a
where a.seqnum = 1
union all
select current_id, current_time,
sum(freq1) over (order by current_time rows unbounded preceding) as freq1,
sum(freq2) over (order by current_time rows unbounded preceding) as freq2
from (
select current_id, current_time, freq1, freq2,
row_number() over (partition by current_id order by past_id desc) as seqnum
from t
) b
where b.seqnum = 1;
I'm thinking you want union all along with window functions. Here is an example:
select min(past_id) as id, min(past_time) as event_time, 0 as freq1, 0 as freq2
from t
union all
(select current_id, current_time,
sum(freq1) over (order by current_time),
sum(freq2) over (order by current_time)
from (select current_id, current_time, freq1, freq2,
row_number() over (partition by current_id order by past_id desc) as seqnum
from t
) t
where seqnum = 1
);
The way your data is in your snapshot table, I think the following SQL should give you what you are looking for in the desired outcome that you posted
SELECT 1 AS id
      ,'time1' AS event_time
      ,0 AS freq1
      ,0 AS freq2
UNION
SELECT T.current_id AS id
      ,T.current_time AS event_time
      ,SUM(T.freq1) AS freq1
      ,SUM(T.freq2) AS freq2
FROM snapshot AS T
GROUP
    BY T.current_id
      ,T.current_time
The first SELECT in the above UNION is there so that you can get the first record for time1, since it does not really have an entry in your base table which holds all the snapshots. It does not have a FROM clause since we are only selecting constants; if Redshift does not support that, you might need to look for something equivalent to the DUAL table in Oracle.
Hope this helps.