Row number with condition - SQL

I want to increase the row number of a partition based on a condition. This question refers to the same problem, but in my case, the column I want to condition on is another window function.
I want to identify the session number of each user (id) depending on how long ago was their last recorded action (ts).
My table looks as follows:
id ts
1 2022-08-01 09:00:00 -- user 1, first session
1 2022-08-01 09:10:00
1 2022-08-01 09:12:00
1 2022-08-03 12:00:00 -- user 1, second session
1 2022-08-03 12:03:00
2 2022-08-01 11:04:00 -- user 2, first session
2 2022-08-01 11:07:00
2 2022-08-25 10:30:00 -- user 2, second session
2 2022-08-25 10:35:00
2 2022-08-25 10:36:00
I want to assign each user a session identifier based on the following conditions:
If the user's last action was 30 or more minutes ago (or doesn't exist), then increase (or initialize) the row number.
If the user's last action was less than 30 minutes ago, don't increase the row number.
I want to get the following result:
id ts session_id
1 2022-08-01 09:00:00 1
1 2022-08-01 09:10:00 1
1 2022-08-01 09:12:00 1
1 2022-08-03 12:00:00 2
1 2022-08-03 12:03:00 2
2 2022-08-01 11:04:00 1
2 2022-08-01 11:07:00 1
2 2022-08-25 10:30:00 2
2 2022-08-25 10:35:00 2
2 2022-08-25 10:36:00 2
If I had a separate column with the seconds since their last session, I could simply add 1 to each user's partitioned sum. However, this column is a window function itself. Hence, the following query doesn't work:
select
    id
    ,ts
    ,extract(
        epoch from (
            ts - lag(ts, 1) over (partition by id order by ts)
        )
    ) as seconds_since -- Number of seconds since last action (works well)
    ,sum(
        case
            when coalesce(
                extract(
                    epoch from (
                        ts - lag(ts, 1) over (partition by id order by ts)
                    )
                ), 1800
            ) >= 1800 then 1
            else 0
        end
    ) over (partition by id order by ts) as session_id -- Window inside window (crashes)
from
    t
order by
    id
    ,ts
ERROR: Aggregate window functions with an ORDER BY clause require a frame clause

Use the LAG() window function to get the previous ts of each row and create a flag column indicating whether the gap between the two timestamps is at least 30 minutes.
Then use the SUM() window function over that flag:
SELECT
    id
    ,ts
    ,SUM(flag) OVER (
        PARTITION BY id
        ORDER BY ts
        ROWS UNBOUNDED PRECEDING -- necessary in aws-redshift
    ) AS session_id
FROM (
    SELECT
        *
        ,COALESCE(
            (LAG(ts) OVER (PARTITION BY id ORDER BY ts) < ts - INTERVAL '30 minute')::int
            , 1
        ) AS flag
    FROM
        tablename
) t
;
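If you want to sanity-check the flag-then-running-sum approach outside Redshift, here is a minimal sketch using SQLite's window functions from Python (assumes a Python build bundling SQLite 3.25+; `strftime('%s', ...)` stands in for Postgres's `extract(epoch from ...)`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (id INTEGER, ts TEXT);
INSERT INTO t VALUES
 (1,'2022-08-01 09:00:00'),(1,'2022-08-01 09:10:00'),(1,'2022-08-01 09:12:00'),
 (1,'2022-08-03 12:00:00'),(1,'2022-08-03 12:03:00'),
 (2,'2022-08-01 11:04:00'),(2,'2022-08-01 11:07:00'),
 (2,'2022-08-25 10:30:00'),(2,'2022-08-25 10:35:00'),(2,'2022-08-25 10:36:00');
""")

rows = con.execute("""
SELECT id, ts,
       SUM(flag) OVER (PARTITION BY id ORDER BY ts
                       ROWS UNBOUNDED PRECEDING) AS session_id
FROM (
    SELECT *,
           -- 1 when the previous action is missing or at least 1800 s ago
           COALESCE(strftime('%s', ts) -
                    strftime('%s', LAG(ts) OVER (PARTITION BY id ORDER BY ts))
                    >= 1800, 1) AS flag
    FROM t
)
ORDER BY id, ts
""").fetchall()

for r in rows:
    print(r)  # (id, ts, session_id)
```

The inner query computes the per-row flag; the outer running SUM turns flags into session numbers, exactly as in the answer above.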

Related

How to filter out multiple downtime events in SQL Server?

There is a query I need to write that will filter out multiples of the same downtime event. These records get created at the exact same time with multiple different TimeStealer values, which I don't need. Also, in the event of multiple TimeStealer values for a downtime event, I need to make the TimeStealer NULL instead.
Example table:
Id  TimeStealer  Start                End                  Is_Downtime  Downtime_Event
1   Machine 1    2022-01-01 01:00:00  2022-01-01 01:01:00  1            Malfunction
2   Machine 2    2022-01-01 01:00:00  2022-01-01 01:01:00  1            Malfunction
3   NULL         2022-01-01 00:01:00  2022-01-01 00:59:59  0            Operating
What I need the query to return:
Id  TimeStealer  Start                End                  Is_Downtime  Downtime_Event
1   NULL         2022-01-01 01:00:00  2022-01-01 01:01:00  1            Malfunction
2   NULL         2022-01-01 00:01:00  2022-01-01 00:59:59  0            Operating
Seems like this is a top-1-row-per-group problem, but with the added logic of making a column NULL when there are multiple rows. You can achieve that by also using a windowed COUNT, and then a CASE expression in the outer SELECT to only return the value of TimeStealer when there was 1 event:
WITH CTE AS (
    SELECT V.Id,
           V.TimeStealer,
           V.Start,
           V.[End],
           V.Is_Downtime,
           V.Downtime_Event,
           ROW_NUMBER() OVER (PARTITION BY V.Start, V.[End], V.Is_Downtime, V.Downtime_Event ORDER BY Id) AS RN,
           COUNT(V.Id) OVER (PARTITION BY V.Start, V.[End], V.Is_Downtime, V.Downtime_Event) AS Events
    FROM (VALUES ('1', 'Machine 1', CONVERT(datetime2(0), '2022-01-01 01:00:00'), CONVERT(datetime2(0), '2022-01-01 01:01:00'), '1', 'Malfunction'),
                 ('2', 'Machine 2', CONVERT(datetime2(0), '2022-01-01 01:00:00'), CONVERT(datetime2(0), '2022-01-01 01:01:00'), '1', 'Malfunction'),
                 ('3', NULL, CONVERT(datetime2(0), '2022-01-01 00:01:00'), CONVERT(datetime2(0), '2022-01-01 00:59:59'), '0', 'Operating')
         ) V (Id, TimeStealer, [Start], [End], Is_Downtime, Downtime_Event)
)
SELECT ROW_NUMBER() OVER (ORDER BY Id) AS Id,
       CASE WHEN C.Events = 1 THEN C.TimeStealer END AS TimeStealer,
       C.Start,
       C.[End],
       C.Is_Downtime,
       C.Downtime_Event
FROM CTE C
WHERE C.RN = 1;
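The same shape (a windowed ROW_NUMBER plus a windowed COUNT over the duplicate key, then a CASE in the outer query) can be sketched in SQLite from Python; this illustrates the logic only and is not T-SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE downtime (Id INTEGER, TimeStealer TEXT, Start TEXT, "End" TEXT,
                       Is_Downtime INTEGER, Downtime_Event TEXT);
INSERT INTO downtime VALUES
 (1, 'Machine 1', '2022-01-01 01:00:00', '2022-01-01 01:01:00', 1, 'Malfunction'),
 (2, 'Machine 2', '2022-01-01 01:00:00', '2022-01-01 01:01:00', 1, 'Malfunction'),
 (3, NULL,        '2022-01-01 00:01:00', '2022-01-01 00:59:59', 0, 'Operating');
""")

rows = con.execute("""
WITH cte AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Start, "End", Is_Downtime, Downtime_Event
                              ORDER BY Id) AS rn,
           COUNT(*) OVER (PARTITION BY Start, "End", Is_Downtime, Downtime_Event) AS events
    FROM downtime
)
SELECT ROW_NUMBER() OVER (ORDER BY Id) AS new_id,
       -- blank out TimeStealer when the event had more than one row
       CASE WHEN events = 1 THEN TimeStealer END AS TimeStealer,
       Start, "End", Is_Downtime, Downtime_Event
FROM cte
WHERE rn = 1
ORDER BY Id
""").fetchall()

for r in rows:
    print(r)
```

Duplicate downtime rows collapse to one row with a NULL TimeStealer, matching the desired result set.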

SQL BigQuery - COUNTIF with criteria from current row and partitioned rows

I'm running this line of code:
COUNTIF(
    type = "credit"
    AND DATETIME_DIFF(credit_window_end, start_at_local_true_01, DAY) BETWEEN 0 and 5
) over (
    partition by case_id order by start_at_local_true_01
    ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
) as credit_count_per_case_id_in_future_and_within_credit_window,
And I'm getting this:
Row  case_id  start_at_local_true_01  type    credit_window_end    credit_count_per_case_id_in_future_and_within_credit_window
1    12123    2022-02-01 11:00:00     null    2022-02-06 11:00:00  0
2    12123    2022-02-01 11:15:00     run     null                 0
3    12123    2022-02-01 11:21:00     jump    2022-02-06 11:21:00  0
4    12123    2022-02-04 11:31:00     run     2022-02-09 11:31:00  0
5    12123    2022-02-05 11:34:00     jump    null                 0
6    12123    2022-02-08 12:38:00     credit  null                 0
7    12555    2022-02-01 11:15:00     null    null                 0
But I want this:
Row  case_id  start_at_local_true_01  type    credit_window_end    credit_count_per_case_id_in_future_and_within_credit_window
1    12123    2022-02-01 11:00:00     null    2022-02-06 11:00:00  0
2    12123    2022-02-01 11:15:00     run     null                 0
3    12123    2022-02-01 11:21:00     jump    2022-02-06 11:21:00  0
4    12123    2022-02-04 11:31:00     run     2022-02-09 11:31:00  1
5    12123    2022-02-05 11:34:00     jump    null                 0
6    12123    2022-02-08 12:38:00     credit  null                 0
7    12555    2022-02-01 11:15:00     null    null                 0
The 4th row should be 1 because (from the 6th row) credit = credit AND DATETIME_DIFF(2022-02-08 12:38:00, 2022-02-04 11:31:00, DAY) between 0 and 5.
The calculation within the cell would look like this:
COUNTIF(
    run    = credit AND DATETIME_DIFF(2022-02-04 11:31:00, 2022-02-04 11:31:00, DAY) between 0 and 5
    jump   = credit AND DATETIME_DIFF(2022-02-04 11:31:00, 2022-02-05 11:34:00, DAY) between 0 and 5
    credit = credit AND DATETIME_DIFF(2022-02-04 11:31:00, 2022-02-08 12:38:00, DAY) between 0 and 5
)
COUNTIF(
    false and false
    false and false
    true  and true
)
COUNTIF(
    0
    0
    1
)
I think I know why, but I don't know how to fix it.
It's because the DATETIME_DIFF function is taking both values from the same row (from each partitioned row). The second element should stay the same (start_at_local_true_01). But I want the first element to be fixed to the CURRENT ROW's credit_window_end (not each partitioned row's credit_window_end).
This is my code so far (including sample table):
with data_table as (
    select * from unnest(ARRAY<STRUCT<
        case_id INT64, start_at_local_true_01 DATETIME, type STRING, credit_window_end DATETIME>>
    [
        (12123, DATETIME("2022-02-01 11:00:00"), null, DATETIME("2022-02-06 11:00:00"))
        ,(12123, DATETIME("2022-02-01 11:15:00"), 'run', null)
        ,(12123, DATETIME("2022-02-01 11:21:00"), 'jump', DATETIME("2022-02-06 11:21:00"))
        ,(12123, DATETIME("2022-02-04 11:31:00"), 'run', DATETIME("2022-02-09 11:31:00"))
        ,(12123, DATETIME("2022-02-05 11:34:00"), 'jump', null)
        ,(12123, DATETIME("2022-02-08 12:38:00"), 'credit', null)
        ,(12555, DATETIME("2022-02-01 11:15:00"), null, null)
    ])
)
select
    data_table.*,
    COUNTIF(
        type = "credit"
    ) over (
        partition by case_id order by start_at_local_true_01
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
    ) as credit_count_per_case_id_in_future,
    COUNTIF(
        type = "credit"
        AND DATETIME_DIFF(start_at_local_true_01, credit_window_end, DAY) BETWEEN 0 and 5
    ) over (
        partition by case_id order by start_at_local_true_01
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
    ) as credit_count_per_case_id_in_future_and_within_credit_window,
    -- does not work, does not even run:
    -- DATETIME_DIFF(
    --     credit_window_end,
    --     array_agg(
    --         IFNULL(start_at_local_true_01, DATETIME("2000-01-01 00:00:00"))
    --     ) over (
    --         partition by case_id order by start_at_local_true_01 asc
    --         ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
    --     ), DAY
    -- )
    -- as credit_count_per_case_id_in_future_and_within_credit_window_02,
from data_table
Thanks for the help!
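For reference, the intended semantics (the first DATETIME_DIFF argument pinned to the current row's credit_window_end, the second taken from each row in the frame) can be sketched in plain Python; this is an illustration of the desired calculation, not BigQuery code:

```python
from datetime import datetime, timedelta

rows = [  # (case_id, start_at_local_true_01, type, credit_window_end)
    (12123, "2022-02-01 11:00:00", None,     "2022-02-06 11:00:00"),
    (12123, "2022-02-01 11:15:00", "run",    None),
    (12123, "2022-02-01 11:21:00", "jump",   "2022-02-06 11:21:00"),
    (12123, "2022-02-04 11:31:00", "run",    "2022-02-09 11:31:00"),
    (12123, "2022-02-05 11:34:00", "jump",   None),
    (12123, "2022-02-08 12:38:00", "credit", None),
    (12555, "2022-02-01 11:15:00", None,     None),
]

def parse(s):
    return datetime.fromisoformat(s) if s else None

parsed = [(cid, parse(st), ty, parse(cwe)) for cid, st, ty, cwe in rows]

def count_future_credits(i):
    """Count credit rows at/after this row whose start falls inside
    THIS row's credit window (current row's credit_window_end, fixed)."""
    cid, start, _, cwe = parsed[i]
    if cwe is None:
        return 0
    return sum(
        1
        for cid2, start2, ty2, _ in parsed
        if cid2 == cid and start2 >= start          # current row and following
        and ty2 == "credit"
        and timedelta(0) <= cwe - start2 <= timedelta(days=5)
    )

counts = [count_future_credits(i) for i in range(len(parsed))]
print(counts)  # row 4 picks up the credit action on 2022-02-08
```

Only row 4 counts a credit, matching the "But I want this" table above.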
As confirmed by @Phil in the comments, this was solved by changing the window to:
over (partition by case_id order by UNIX_MILLIS(TIMESTAMP(start_at_local_true_01)) RANGE BETWEEN CURRENT ROW AND 432000000 FOLLOWING)
(432000000 ms is 5 days.)
Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.
Feel free to edit this answer for additional information.

SQL query using time series

I have the below table in bigquery:
Timestamp variant_id activity
2020-04-02 08:50 1 active
2020-04-03 07:39 1 not_active
2020-04-04 07:40 1 active
2020-04-05 10:22 2 active
2020-04-07 07:59 2 not_active
I want to query this subset of data to get the number of active variants per day.
If variant_id 1 is active on 2020-04-04, it stays active on the following dates (2020-04-05, 2020-04-06, ...) until the activity column value is not_active. The goal is to count, for each day, the number of variant_id values with the value active in the activity column, taking into account that each variant_id carries forward its last activity value on any given date.
For example, the result of the desired query on the subset data must be:
Date activity_count
2020-04-02 1
2020-04-03 0
2020-04-04 1
2020-04-05 2
2020-04-06 2
2020-04-07 1
2020-04-08 1
2020-04-09 1
2020-04-10 1
Any help please?
Consider the approach below:
select date, count(distinct if(activity = 'active', variant_id, null)) activity_count
from (
    select date(timestamp) date, variant_id, activity,
           lead(date(timestamp)) over (partition by variant_id order by timestamp) next_date
    from your_table
), unnest(generate_date_array(date, ifnull(next_date - 1, '2020-04-10'))) date
group by date
If applied to the sample data in your question, the output matches the desired result above.
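The same carry-forward logic (each status row holds from its date until the day before the variant's next row, then count active variants per day) can be sketched in plain Python; the 2020-04-10 horizon mirrors the ifnull(..., '2020-04-10') in the query:

```python
from datetime import date, timedelta

rows = [  # (timestamp date, variant_id, activity)
    (date(2020, 4, 2), 1, "active"),
    (date(2020, 4, 3), 1, "not_active"),
    (date(2020, 4, 4), 1, "active"),
    (date(2020, 4, 5), 2, "active"),
    (date(2020, 4, 7), 2, "not_active"),
]
horizon = date(2020, 4, 10)

# Each row's status applies from its date up to the day before the
# variant's next row (or up to the horizon for the variant's last row).
rows.sort(key=lambda r: (r[1], r[0]))
active_by_day = {}
for i, (d, vid, act) in enumerate(rows):
    is_last = i + 1 == len(rows) or rows[i + 1][1] != vid
    until = horizon if is_last else rows[i + 1][0] - timedelta(days=1)
    while d <= until:
        if act == "active":
            active_by_day.setdefault(d, set()).add(vid)
        d += timedelta(days=1)

day = date(2020, 4, 2)
result = []
while day <= horizon:
    result.append((day.isoformat(), len(active_by_day.get(day, set()))))
    day += timedelta(days=1)
print(result)
```

The per-day counts reproduce the desired table (1, 0, 1, 2, 2, then 1 for the remaining days).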

Adjust date overlaps within a group

I have this table, and I want to adjust END_DATE to one day prior to the next ST_DATE when there are overlapping dates for a group of ID values.
TABLE HAVE
ID ST_DATE END_DATE
1 2020-01-01 2020-02-01
1 2020-05-10 2020-05-20
1 2020-05-18 2020-06-19
1 2020-11-11 2020-12-01
2 1999-03-09 1999-05-10
2 1999-04-09 2000-05-10
3 1999-04-09 2000-05-10
3 2000-06-09 2000-08-16
3 2000-08-17 2009-02-17
Below is what I'm looking for
TABLE WANT
ID ST_DATE END_DATE
1 2020-01-01 2020-02-01
1 2020-05-10 2020-05-17 =====changed to a day less than the next ST_DATE due to some sort of overlap
1 2020-05-18 2020-06-19
1 2020-11-11 2020-12-01
2 1999-03-09 1999-04-08 =====changed to a day less than the next ST_DATE due to some sort of overlap
2 1999-04-09 2000-05-10
3 1999-04-09 2000-05-10
3 2000-06-09 2000-08-16
3 2000-08-17 2009-02-17
Maybe you can use LEAD() for this. Initial idea:
select
id, st_date, end_date
, lead( st_date ) over ( partition by id order by st_date ) nextstart_
from overlap
;
-- result
ID ST_DATE END_DATE NEXTSTART
---------- --------- --------- ---------
1 01-JAN-20 01-FEB-20 10-MAY-20
1 10-MAY-20 20-MAY-20 18-MAY-20
1 18-MAY-20 19-JUN-20 11-NOV-20
1 11-NOV-20 01-DEC-20
2 09-MAR-99 10-MAY-99 09-APR-99
2 09-APR-99 10-MAY-00
3 09-APR-99 10-MAY-00 09-JUN-00
3 09-JUN-00 16-AUG-00 17-AUG-00
3 17-AUG-00 17-FEB-09
Once you have the next start date and the end_date side by side (as it were),
you can use CASE ... for adjusting the dates as you need them.
select ilv.id, ilv.st_date
, case
when ilv.end_date > ilv.nextstart_ then
to_char( ilv.nextstart_ - 1 ) || ' <- modified end date'
else
to_char( ilv.end_date )
end dt_modified
from (
select
id, st_date, end_date
, lead( st_date ) over ( partition by id order by st_date ) nextstart_
from overlap
) ilv
;
ID ST_DATE DT_MODIFIED
---------- --------- ---------------------------------------
1 01-JAN-20 01-FEB-20
1 10-MAY-20 17-MAY-20 <- modified end date
1 18-MAY-20 19-JUN-20
1 11-NOV-20 01-DEC-20
2 09-MAR-99 08-APR-99 <- modified end date
2 09-APR-99 10-MAY-00
3 09-APR-99 10-MAY-00
3 09-JUN-00 16-AUG-00
3 17-AUG-00 17-FEB-09
If two "windows" for the same id have the same start date, then the problem doesn't make sense. So, let's assume that the problem makes sense - that is, the combination (id, st_date) is unique in the inputs.
Then the problem can be formulated as follows: for each id, order the rows by st_date ascending. Then, for each row, if its end_date is less than the following st_date, return the row as is; otherwise replace end_date with the following st_date, minus 1. This last step can be achieved with the analytic lead() function.
A solution might look like this:
select id, st_date,
       least(end_date,
             lead(st_date, 1, end_date + 1) over (partition by id order by st_date) - 1
       ) as end_date
from have;
The bit about end_date + 1 in the lead function handles the last row for each id. For such rows there is no "next" row, so the default application of lead will return null. The default can be overridden by using the third parameter to the function.
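The least/lead solution carries over to SQLite almost verbatim (two-argument min() plays the role of least(), and date arithmetic uses date(..., '+1 day')), so it can be sanity-checked from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE have (id INTEGER, st_date TEXT, end_date TEXT);
INSERT INTO have VALUES
 (1,'2020-01-01','2020-02-01'),(1,'2020-05-10','2020-05-20'),
 (1,'2020-05-18','2020-06-19'),(1,'2020-11-11','2020-12-01'),
 (2,'1999-03-09','1999-05-10'),(2,'1999-04-09','2000-05-10'),
 (3,'1999-04-09','2000-05-10'),(3,'2000-06-09','2000-08-16'),
 (3,'2000-08-17','2009-02-17');
""")

rows = con.execute("""
SELECT id, st_date,
       -- least(end_date, next st_date - 1 day); the lead() default of
       -- end_date + 1 day makes the last row of each id come out unchanged
       MIN(end_date,
           date(LEAD(st_date, 1, date(end_date, '+1 day'))
                    OVER (PARTITION BY id ORDER BY st_date),
                '-1 day')) AS end_date
FROM have
ORDER BY id, st_date
""").fetchall()

for r in rows:
    print(r)
```

Only the two overlapping rows change, in line with the TABLE WANT above.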

Get MAX count but keep the repeated calculated value if highest

I have the following table, I am using SQL Server 2008
BayNo FixDateTime FixType
1 04/05/2015 16:15:00 tyre change
1 12/05/2015 00:15:00 oil change
1 12/05/2015 08:15:00 engine tuning
1 04/05/2016 08:11:00 car tuning
2 13/05/2015 19:30:00 puncture
2 14/05/2015 08:00:00 light repair
2 15/05/2015 10:30:00 super op
2 20/05/2015 12:30:00 wiper change
2 12/05/2016 09:30:00 denting
2 12/05/2016 10:30:00 wiper repair
2 12/06/2016 10:30:00 exhaust repair
4 12/05/2016 05:30:00 stereo unlock
4 17/05/2016 15:05:00 door handle repair
On any given day I need to find the highest number of fixes made on a given bay number, and if that calculated number is repeated, each tied row should also appear in the result set.
So I would like to see the result set as follows:
BayNo FixDateTime noOfFixes
1 12/05/2015 00:15:00 2
2 12/05/2016 09:30:00 2
4 12/05/2016 05:30:00 1
4 17/05/2016 15:05:00 1
I managed to get the counts for each, but I am struggling to get the max and keep the highest calculated value when it is repeated. Can someone help please?
Use window functions.
Get the count for each day by bayno and also find the min fixdatetime for each day per bayno.
Then use dense_rank to compute the highest ranked row for each bayno based on the number of fixes.
Finally get the highest ranked rows.
select distinct bayno, minfixdatetime, no_of_fixes
from (
    select bayno, minfixdatetime, no_of_fixes,
           dense_rank() over (partition by bayno order by no_of_fixes desc) rnk
    from (
        select t.*,
               count(*) over (partition by bayno, cast(fixdatetime as date)) no_of_fixes,
               min(fixdatetime) over (partition by bayno, cast(fixdatetime as date)) minfixdatetime
        from tablename t
    ) x
) y
where rnk = 1
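Here is the same count / min / dense_rank pipeline run against the sample data in SQLite from Python (dates rewritten as ISO yyyy-mm-dd, and date(fixdatetime) standing in for CAST(... AS date)), as a quick check of the logic:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fixes (bayno INTEGER, fixdatetime TEXT, fixtype TEXT);
INSERT INTO fixes VALUES
 (1,'2015-05-04 16:15:00','tyre change'),(1,'2015-05-12 00:15:00','oil change'),
 (1,'2015-05-12 08:15:00','engine tuning'),(1,'2016-05-04 08:11:00','car tuning'),
 (2,'2015-05-13 19:30:00','puncture'),(2,'2015-05-14 08:00:00','light repair'),
 (2,'2015-05-15 10:30:00','super op'),(2,'2015-05-20 12:30:00','wiper change'),
 (2,'2016-05-12 09:30:00','denting'),(2,'2016-05-12 10:30:00','wiper repair'),
 (2,'2016-06-12 10:30:00','exhaust repair'),
 (4,'2016-05-12 05:30:00','stereo unlock'),(4,'2016-05-17 15:05:00','door handle repair');
""")

rows = con.execute("""
SELECT DISTINCT bayno, minfixdatetime, no_of_fixes
FROM (
    SELECT bayno, minfixdatetime, no_of_fixes,
           DENSE_RANK() OVER (PARTITION BY bayno ORDER BY no_of_fixes DESC) AS rnk
    FROM (
        SELECT t.*,
               COUNT(*)         OVER (PARTITION BY bayno, date(fixdatetime)) AS no_of_fixes,
               MIN(fixdatetime) OVER (PARTITION BY bayno, date(fixdatetime)) AS minfixdatetime
        FROM fixes t
    )
)
WHERE rnk = 1
ORDER BY bayno, minfixdatetime
""").fetchall()

for r in rows:
    print(r)  # (bayno, first fix that day, number of fixes)
```

Bay 4 ties at one fix on each of its two days, so both rows survive, matching the expected result set.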
You are looking for rank() or dense_rank(). I would write the query like this:
select bayno, thedate, numFixes
from (select bayno, cast(fixdatetime as date) as thedate,
             count(*) as numFixes,
             rank() over (partition by bayno order by count(*) desc) as seqnum
      from t
      group by bayno, cast(fixdatetime as date)
     ) b
where seqnum = 1;
Note that this returns the date in question. The date does not have a time component.