SQL rollup on sessions

I have an impression event table with a bunch of timestamps and marked start/end session boundaries. I am trying to roll it up to one row per session, with a metric that says "this session contains at least 1 impression with feature x". I'm not sure exactly how to do this. Any help would be appreciated. Thanks.
I want to roll this up into something that looks like:
account, session_start, session_end, interacted_with_feature
3004514, 2018-02-23 13:43:35.475, 2018-02-23 13:43:47.377, FALSE
where it is simple for me to say if this session had any interactions with the feature or not.
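For context, the raw table looks something like this (values made up to match the rollup below):
account, timestamp, session_boundary, interacted_with_feature
3004514, 2018-02-23 13:43:35.475, start, FALSE
3004514, 2018-02-23 13:43:37.183, NULL, TRUE
3004514, 2018-02-23 13:43:47.377, end, FALSE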

Perhaps aggregation does what you want:
select account, min(timestamp), max(timestamp), max(interacted_with_feature)
from t
group by account;
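Note that this collapses everything for an account into one row, so it only works if each account has a single session; with multiple sessions per account you first need something that numbers the sessions, as in the solution below.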

I was able to solve this with conditional cumulative sums to generate a session group ID for each row.
with cte as (
    select *,
           -- running count of 'start' boundaries numbers each account's sessions
           sum(case when session_boundary = 'start' then 1 else 0 end)
               over (partition by account
                     order by timestamp
                     rows unbounded preceding) as session_num
    from raw_sessions
)
select account,
       session_num,
       min(timestamp) as session_start,
       max(timestamp) as session_end,
       bool_or(interacted_with_feature) as interacted_with_feature
from cte
group by account, session_num
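As a sanity check, here is the same technique run against inline sample data (Postgres syntax, made-up rows). The running sum ticks up at every 'start' boundary, so all rows of a session share one session_num:
with raw_sessions (account, "timestamp", session_boundary, interacted_with_feature) as (
    values (3004514, timestamp '2018-02-23 13:43:35.475', 'start', false),
           (3004514, timestamp '2018-02-23 13:43:37.183', null, true),
           (3004514, timestamp '2018-02-23 13:43:47.377', 'end', false),
           (3004514, timestamp '2018-02-23 14:02:12.000', 'start', false),
           (3004514, timestamp '2018-02-23 14:02:30.000', 'end', false)
),
cte as (
    select *,
           sum(case when session_boundary = 'start' then 1 else 0 end)
               over (partition by account order by "timestamp"
                     rows unbounded preceding) as session_num
    from raw_sessions
)
select account,
       session_num,
       min("timestamp") as session_start,
       max("timestamp") as session_end,
       bool_or(interacted_with_feature) as interacted_with_feature
from cte
group by account, session_num
order by session_num;
-- session 1 rolls up with interacted_with_feature = true, session 2 with false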

Related

Oracle SQL Count and Avg in the same query

I have the table PATIENT_SESSIONS with these fields:
PATIENT_ID,
Session_Date,
Session_Status (Scheduled, Completed, Canceled),
PATIENT_Paid_Date,
Amount
From this table I want to get, for each patient_id, the last Session_Date, the average difference between PATIENT_Paid_Date and Session_Date, the max(Amount), and the count of Completed sessions, all in a single query.
Is it possible?
Assuming that by "for each patient_id" you mean grouping by PATIENT_ID:
SELECT
PATIENT_ID
, MAX(Session_Date) AS last_session_date
, AVG(Session_Date - PATIENT_Paid_Date) AS avg_between_dates
-- not sure if this is what you want without seeing sample data
, MAX(Amount) AS max_amount
, SUM(CASE WHEN Session_Status = 'Completed' THEN 1 ELSE 0 END)
AS count_complete_sessions
FROM PATIENT_SESSIONS
GROUP BY PATIENT_ID
Should be possible.
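Two side notes: in Oracle, subtracting one DATE from another yields the difference in days, so avg_between_dates comes back in days; and the conditional count can equivalently be written with COUNT, which skips NULLs:
, COUNT(CASE WHEN Session_Status = 'Completed' THEN 1 END)
    AS count_complete_sessions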

How to increment a parent group number when the child window has incrementing values?

I am using Spark SQL 3.2.0
Please see the DB Fiddle link for a simplified example of my dataset and desired outcome.
In short, I have a dataset with a series of related events that can be grouped by their time order and event number. When ordering by time and event number, every time the event number resets to 1, you're looking at a new set of events.
I understand how to use row_number() or dense_rank() to increment event_group_number where sub_event_number = 1, but I'm uncertain how to make the rows where sub_event_number > 1 take on the correct event_group_number.
I'm currently doing the following:
case
when sub_event_number = 1 and is_event_type
then row_number() over (partition by context_id, event_id, sub_event_number order by is_event_type asc, start_time asc) - 1
else null
end as event_group_number
I'd be grateful for any help, and I'm happy to answer any questions.
It seems you're looking for a cumulative conditional sum:
SELECT context_id,
event_id,
start_time,
NULLIF(
SUM(CASE WHEN sub_event_number = 1 THEN 1 ELSE 0 END) OVER(
PARTITION BY context_id, event_id
ORDER BY is_event_type, start_time) - 1,
0
) AS event_group_number
FROM foobar
ORDER BY context_id, event_id, is_event_type, start_time
db-fiddle
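To see the cumulative-sum trick in isolation, here is a toy Spark SQL version (two columns only, made-up rows; the real query also partitions by context_id and event_id and orders by is_event_type):
SELECT id, sub_event_number,
       NULLIF(
         SUM(CASE WHEN sub_event_number = 1 THEN 1 ELSE 0 END)
           OVER (ORDER BY id) - 1,
         0
       ) AS event_group_number
FROM VALUES
  (1, 1), (2, 2), (3, 3),  -- first set:  sum = 1, so 1 - 1 = 0 -> NULL
  (4, 1), (5, 2),          -- second set: sum = 2 -> 1
  (6, 1)                   -- third set:  sum = 3 -> 2
AS t(id, sub_event_number);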

How to generate a session_id in SQL?

My tracking system does not generate session IDs.
I have user_id & event_date_time.
I need a new session_id for each user's session that starts 30 minutes or more after that user's last event_date_time.
My final goal is to calculate the median session time.
I tried to generate session_id=1 and session_id=2 once event_date_time - next_event_date_time > 30 minutes for the same user, but I'm stuck from here:
select a.*,
       case when (a.next_event_date - a.event_date) * 24 * 60 < 30
                 and userID = next_userID
            then 1
            when (a.next_event_date - a.event_date) * 24 * 60 >= 30
                 and userID = next_userID
            then 2
       end session_id
from (select f.userID,
             lead(f.userID) over (partition by f.guid order by f.event_date)
                 as next_userID,
             f.event_date,
             lead(f.event_date) over (partition by f.guid order by f.event_date)
                 as next_event_date
      from event_table f
     ) a
where next_event_date is not null
If I understood correctly, you could generate IDs this way:
select id, guid, event_date,
       sum(chg) over (partition by guid order by event_date) as session_id
from (
    select id, guid, event_date,
           case when lag(guid) over (partition by guid order by event_date) = guid
                 and 24 * 60 * (event_date - lag(event_date)
                         over (partition by guid order by event_date)) < 30
                then 0
                else 1
           end as chg
    from event_table
) a
dbfiddle demo
Compare neighbouring rows: if the guid changes or the time difference is 30 minutes or more, assign 1, otherwise 0. Then sum these flags analytically to get a running session_id.
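Since the final goal is the median session time, a sketch of that last step (Oracle syntax; sessions_with_id is just an assumed name for the result of the query above):
select median(session_minutes) as median_session_minutes
from (
    select guid, session_id,
           24 * 60 * (max(event_date) - min(event_date)) as session_minutes
    from sessions_with_id  -- e.g. the query above wrapped in a CTE
    group by guid, session_id
);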
I think you're on the right track using lead or lag. My recommendation would be to break this into steps and create a temp table to work against:
With the first query, assign every record its own unique ID, either a sequence number or GUID. You could also capture some of the lagged data in this step.
With a second query, find the overlaps (< 30 minutes) and make the overlapping records all the same -- either the same as the earliest or latest in that grouping, doesn't matter as long as it's consistent.
Something like this:
create table events_temp as (
    select f.*,
           row_number() over (partition by f.userID order by f.event_date) as user_row,
           lag(f.userID) over (partition by f.userID order by f.event_date) as prev_userID,
           lag(f.event_date) over (partition by f.userID order by f.event_date) as prev_event_date
    from event_table f
    order by f.userID, f.event_date
)

select a.*,
       -- a plain lag(user_row) reaches only one row back, so sessions of three or
       -- more events would drift; a running max instead carries the user_row of
       -- the session-opening row to every later row in the same session
       max(case when prev_userID = userID
                 and 24 * 60 * (event_date - prev_event_date) < 30
                then null
                else user_row
           end) over (partition by userID
                      order by user_row
                      rows unbounded preceding) as session_id
from events_temp a

Optimizing sum() over(order by...) clause throwing 'resources exceeded' error

I'm computing a sessions table from event data from our website in BigQuery. The events table has around 12 million events (pretty small). After I add in the logic to create sessions, I want to number all sessions and assign a global_session_id. I'm doing that using a sum() over (order by ...) clause, which throws a "resources exceeded" error. I know that the order by clause forces all the data to be processed on a single node, and that is what exceeds the compute resources, but I'm not sure what changes I can make to my code to achieve the same result. Any workarounds, advice, or explanations are greatly appreciated.
with sessions_1 as ( /* Tie a visitor's last event and last campaign to current event. */
select visitor_id as session_user_id,
sent_at,
context_campaign_name,
event,
id,
LAG(sent_at,1) OVER (PARTITION BY visitor_id ORDER BY sent_at) as last_event,
LAG(context_campaign_name,1) OVER (PARTITION BY visitor_id ORDER BY sent_at) as last_event_campaign_name
from tracks_2
),
sessions_2 as ( /* Flag events that begin a new session. */
select *,
case
when context_campaign_name != last_event_campaign_name
or context_campaign_name is null and last_event_campaign_name is not null
or context_campaign_name is not null and last_event_campaign_name is null
then 1
when unix_seconds(sent_at)
- unix_seconds(last_event) >= (60 * 30)
or last_event is null
then 1
else 0
end as is_new_session
from sessions_1
),
sessions_3 as ( /* Assign events sessions numbers for total sessions and total user sessions. */
select id as event_id,
sum(is_new_session) over (order by session_user_id, sent_at) as global_session_id
#sum(is_new_session) over (partition by session_user_id order by sent_at) as user_session_id
from materialized_result_of_sessions_2_query
)
select * from sessions_3
It might help to define a CTE with just the sessions, rather than working at the event level. If this works:
select session_user_id, sent_at,
       row_number() over (order by session_user_id, sent_at) as global_session_id
from materialized_result_of_sessions_2_query
where is_new_session = 1
group by session_user_id, sent_at;
If that works, you can join the result back to the original event-level data and use a max() window function to assign the session id to all events. Something like:
select e.*,
       max(s.global_session_id) over (partition by e.session_user_id order by e.sent_at) as global_session_id
from events e left join
     (<above query>) s
     on s.session_user_id = e.session_user_id and s.sent_at = e.sent_at;
If not, you can construct the global id from a per-user session number plus a per-user offset:
select us.*, us.user_session_id + s.offset as global_session_id
from (select session_user_id, sent_at,
             row_number() over (partition by session_user_id order by sent_at) as user_session_id
      from materialized_result_of_sessions_2_query
      where is_new_session = 1
     ) us join
     (select session_user_id, count(*) as cnt,
             sum(count(*)) over (order by session_user_id) - count(*) as offset
      from materialized_result_of_sessions_2_query
      where is_new_session = 1
      group by session_user_id
     ) s
     on us.session_user_id = s.session_user_id;
This might still fail if almost all users are unique and the sessions are short.
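For intuition on the offset trick, a toy BigQuery snippet (made-up per-user session counts): each user's offset is the total number of sessions of all earlier users, so user_session_id + offset is globally unique and gap-free.
select session_user_id, cnt,
       sum(cnt) over (order by session_user_id) - cnt as offset
from unnest([
    struct('a' as session_user_id, 3 as cnt),
    ('b', 2),
    ('c', 4)
]);
-- offsets: a -> 0, b -> 3, c -> 5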

Writing an analytic function to mark last date of some column

Can you help me write an analytic function that marks the last date a client's service was stopped? For example, a client may have 2-3 stops of their service; I would like to count how many stops there are and mark the last stop date.
I'm using
SELECT column_name1, column_name2, column_name3, column_name4
, ROW_NUMBER() OVER(PARTITION BY column_name3 ORDER BY column_name4) AS Something
FROM ...
WHERE ...
ORDER BY ...
where column_name3 contains the status (whether the service is stopped) and column_name4 contains the date of the last stop.
I hope I understood you correctly:
select distinct column_name1, column_name2, column_name3,
       max(case when column_name3 = 'inactive' then column_name4 end)
           over (partition by column_name1) as last_date,
       count(case when column_name3 = 'inactive' then 1 end)
           over (partition by column_name1) as cnt
from ...
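The windowed max/count put the client's last 'inactive' date and total number of stops on every row for that client, and distinct then removes the duplicate rows.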