I don't know how to solve this problem. Maybe you can point me in the right direction or give a link.
I have a table:
id Date
23 01.01.2020
23 03.01.2020
23 04.01.2020
56 07.01.2020
56 08.01.2020
87 11.01.2020
23 12.01.2020
23 18.01.2020
I want to aggregate the data (id, Date_min) and add a new column, like this:
id Date_min Date_new
23 01.01.2020 07.01.2020
56 07.01.2020 11.01.2020
87 11.01.2020 12.01.2020
23 12.01.2020 18.01.2020
In the column Date_new I want to see the next user's first date. If there is no next user, use the user's max date.
LEAD will give you the next date, but we also have the slight sticking problem that your ID repeats, so we need something to make the second 23 distinct from the first. For that I guess we can establish a counter that ticks up every time the ID changes:
with a as (
    select '23' as id, '01.01.2020' as "date" union all
    select '23' as id, '03.01.2020' as "date" union all
    select '23' as id, '04.01.2020' as "date" union all
    select '56' as id, '07.01.2020' as "date" union all
    select '56' as id, '08.01.2020' as "date" union all
    select '87' as id, '11.01.2020' as "date" union all
    select '23' as id, '12.01.2020' as "date" union all
    select '23' as id, '18.01.2020' as "date"
), b as (
    SELECT *, LAG(id) OVER (ORDER BY "date") as last_id
    FROM a
), c AS (
    SELECT *,
        LEAD("date") OVER (ORDER BY "date") as next_date,
        SUM(CASE WHEN last_id <> id THEN 1 ELSE 0 END)
            OVER (ORDER BY "date" ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as id_ctr
    FROM b
)
SELECT id, MIN("date"), MAX(next_date)
FROM c
GROUP BY id, id_ctr
I haven't got a PG instance to test this on, but it works in SQL Server, and I'm pretty sure PG supports everything used here - there isn't any SQL Server-specific syntax.
a takes the place of your table - you can drop it from your query and just start with b as (select ... from yourtablenamehere)
b calculates the previous ID; we'll use this to detect whether the id has changed between the current row and the previous one. If it changed we put a 1, otherwise a 0. Summed as a running total, this effectively makes a counter that ticks up every time the ID changes, so we can group by this counter as well as the ID to split our two 23s apart. We need a separate step because window functions can't be nested
c takes the last_id and does the running total. It also computes next_date with a simple window function that pulls the date from the following row (rows ordered by date). The ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW is technically unnecessary (the default frame behaves the same here because the dates are unique), but I find being explicit helps document the intent and makes it easier to change if needed
Then all that is required is to select the id, min date and max next_date, but throw the counter into the grouping too to split the 23s up - you're allowed to group by more columns than you select, but not the other way round
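For reference, if I've traced it right, running the above against the sample data should come out as:
id  min         max
23  01.01.2020  07.01.2020
56  07.01.2020  11.01.2020
87  11.01.2020  12.01.2020
23  12.01.2020  18.01.2020
One caveat: the dates in a are plain strings, and dd.mm.yyyy strings only happen to sort correctly here because everything falls in January 2020 - against a real table you'd want a proper date column.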
This is a particularly simple type of gaps-and-islands problem.
You can simply use lag() to determine the first row of each bunch of rows and then lead() to get date_new:
select id, date as date_min,
       lead(date, 1, max_date) over (order by date) as date_new
from (select t.*,
lag(id) over (order by date) as prev_id,
max(date) over () as max_date
from t
) t
where prev_id is null or prev_id <> id;
Here is a db<>fiddle.
Three window functions and no aggregation: this should be by far the fastest approach to this problem.
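For anyone who wants to try it, here is a self-contained version of the above (a sketch: I've swapped the question's dd.mm.yyyy text values for proper date literals, and the VALUES syntax is Postgres-flavored):
with t(id, date) as (
    values ('23', date '2020-01-01'), ('23', date '2020-01-03'),
           ('23', date '2020-01-04'), ('56', date '2020-01-07'),
           ('56', date '2020-01-08'), ('87', date '2020-01-11'),
           ('23', date '2020-01-12'), ('23', date '2020-01-18')
)
select id, date as date_min,
       lead(date, 1, max_date) over (order by date) as date_new
from (select t.*,
             lag(id) over (order by date) as prev_id,
             max(date) over () as max_date
      from t
     ) t
where prev_id is null or prev_id <> id;
Note that the outer lead() is evaluated after the WHERE filter, so it runs over only the island-start rows - that ordering of operations is what makes this work.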
I have a process that occurs every 30 days but can take a few days. How can I differentiate between iterations in order to sum the output of the process?
For example, the output I expect is:
Name         Date        amount  iteration (optional)
Sophia Liu   2016-01-01  4       1
Sophia Liu   2016-02-01  5       2
Nikki Leith  2016-01-02  5       1
Nikki Leith  2016-02-01  10      2
I tried using the lag function on the date field and taking the difference between that column and the date column.
WITH base AS
(SELECT 'Sophia Liu' as name, DATE '2016-01-01' as date, 3 as amount
UNION ALL SELECT 'Sophia Liu', DATE '2016-01-02', 1
UNION ALL SELECT 'Sophia Liu', DATE '2016-02-01', 3
UNION ALL SELECT 'Sophia Liu', DATE '2016-02-02', 2
UNION ALL SELECT 'Nikki Leith', DATE '2016-01-02', 5
UNION ALL SELECT 'Nikki Leith', DATE '2016-02-01', 5
UNION ALL SELECT 'Nikki Leith', DATE '2016-02-02', 3
UNION ALL SELECT 'Nikki Leith', DATE '2016-02-03', 1
UNION ALL SELECT 'Nikki Leith', DATE '2016-02-04', 1)
select
    name
    ,date
    ,lag(date) over (partition by name order by date) as lag_func
    ,date_diff(date, lag(date) over (partition by name order by date), day) as date_difference
    ,case when date_diff(date, lag(date) over (partition by name order by date), day) >= 10
          or date_diff(date, lag(date) over (partition by name order by date), day) is null
          then true else false end as new_iteration
    ,amount
from base
Edited answer
After your clarification and looking at what's actually in your SQL code, I'm guessing you are looking for a solution to what's called a gaps-and-islands problem. That is, you want to identify the "islands" of activity and sum the amount for each iteration or island. Taking your example, you can first identify the start of a new session (or "gap") and then use that to create a unique iteration ("island") identifier for each user. You can then use that identifier to perform a SUM().
-- this continues the WITH clause that defines "base" in your query
gaps as (
    select
        name,
        date,
        amount,
        if(date_diff(date, lag(date, 1) over (partition by name order by date), DAY) >= 10, 1, 0) as new_iteration
    from base
),
islands as (
    select
        *,
        1 + sum(new_iteration) over (partition by name order by date) as iteration_id
    from gaps
)
select
    *,
    sum(amount) over (partition by name, iteration_id) as iteration_amount
from islands
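If you want exactly one row per iteration, as in your expected output, you can aggregate over the islands instead of adding another window function - a sketch that replaces the final select above (iteration_date is just a name I picked for the island's first date):
select
    name,
    min(date) as iteration_date,
    sum(amount) as amount,
    iteration_id as iteration
from islands
group by name, iteration_id
order by name, iteration_id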
Previous answer
Sounds like you just need a RANK() to count the iterations in your window functions. Depending on your needs, you can then sum cumulative or total amounts in similar window functions. Something like this:
select
name
,date
,rank() over (partition by name order by date) as iteration
,sum(amount) over (partition by name order by date) as cumulative_amount
,sum(amount) over (partition by name) as total_amount
,amount
from base
The following is the table:
start_date  recorded_date  id
2021-11-10  2021-11-01     1a
2021-11-08  2021-11-02     1a
2021-11-11  2021-11-03     1a
2021-11-10  2021-11-04     1a
2021-11-10  2021-11-05     1a
I need a query to find the total day changes in aggregate for a given id. In this case, it changed from 10th Nov to 8th Nov so 2 days, then again from 8th to 11th Nov so 3 days and again from 11th to 10th for a day, and finally from 10th to 10th, that is 0 days.
In total there is a change of 2+3+1+0 = 6 days for the id - '1a'.
Basically for each change there is a recorded_date, so we arrange that in ascending order and then calculate the aggregate change of days grouped by id. The final result should be like:
id  Agg_Change
1a  6
Is there a way to do this using SQL? I am using a Vertica database.
Thanks.
You can use the window function LEAD to get the difference between rows and then group by id:
select id, sum(daydiff) as Agg_Change
from (
    select id,
           abs(datediff(day, start_date,
               lead(start_date, 1, start_date) over (partition by id order by recorded_date))) as daydiff
    from tablename
) t
group by id
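For the sample data, my hand trace of the inner derived table looks like this (the three-argument lead defaults to the row's own start_date on the last row of each id, which is what makes the final diff 0):
recorded_date  start_date  next start_date  daydiff
2021-11-01     2021-11-10  2021-11-08       2
2021-11-02     2021-11-08  2021-11-11       3
2021-11-03     2021-11-11  2021-11-10       1
2021-11-04     2021-11-10  2021-11-10       0
2021-11-05     2021-11-10  2021-11-10       0
Summing daydiff per id then gives the expected 6.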
It's indeed a job for LAG(): get the previous date in an OLAP query, then have an outer query take the absolute date difference and sum it, grouping by id:
WITH
-- your input - don't use in real query ...
indata(start_date,recorded_date,id) AS (
SELECT DATE '2021-11-10',DATE '2021-11-01','1a'
UNION ALL SELECT DATE '2021-11-08',DATE '2021-11-02','1a'
UNION ALL SELECT DATE '2021-11-11',DATE '2021-11-03','1a'
UNION ALL SELECT DATE '2021-11-10',DATE '2021-11-04','1a'
UNION ALL SELECT DATE '2021-11-10',DATE '2021-11-05','1a'
)
-- real query starts here, replace following comma with "WITH" ...
,
w_lag AS (
SELECT
id
, start_date
, LAG(start_date) OVER w AS prevdt
FROM indata
WINDOW w AS (PARTITION BY id ORDER BY recorded_date)
)
SELECT
id
, SUM(ABS(DATEDIFF(DAY,start_date,prevdt))) AS dtdiff
FROM w_lag
GROUP BY id
-- out id | dtdiff
-- out ----+--------
-- out 1a | 6
I was thinking the lag function would provide the answer, but it kept giving me the wrong result because I had the wrong logic in one place. I now have the answer I need:
with cte as (
    select id, start_date, recorded_date,
        row_number() over (partition by id order by recorded_date asc) as idrank, -- not actually used below
        lag(start_date, 1) over (partition by id order by recorded_date asc) as prev
    from table_temp
)
select id, sum(abs(date(start_date) - date(prev))) as Agg_Change
from cte
group by 1
If someone has a better solution please let me know.
Let's say I have a BigQuery table "events" (in reality this is a slow sub-query) that stores the count of events per day, by event type. There are many types of events and most of them don't occur on most days, so there is only a row for day/event type combinations with a non-zero count.
I have a query that returns the count for each event type and day and the count for that event from N days ago, which looks like this:
WITH events AS (
SELECT DATE('2019-06-08') AS day, 'a' AS type, 1 AS count
UNION ALL SELECT '2019-06-09', 'a', 2
UNION ALL SELECT '2019-06-10', 'a', 3
UNION ALL SELECT '2019-06-07', 'b', 4
UNION ALL SELECT '2019-06-09', 'b', 5
)
SELECT e1.type, e1.day, e1.count, COALESCE(e2.count, 0) AS prev_count
FROM events e1
LEFT JOIN events e2 ON e1.type = e2.type AND e1.day = DATE_ADD(e2.day, INTERVAL 2 DAY) -- LEFT JOIN, because the event may not have occurred at all 2 days ago
ORDER BY 1, 2
The query is slow. BigQuery best practices recommend using window functions instead of self-joins. Is there a way to do this here? I could use the LAG function if there was a row for each day, but there isn't. Can I "pad" it somehow? (There isn't a short list of possible event types. I could of course join to SELECT DISTINCT type FROM events, but that probably won't be faster than the self-join.)
A brute force method is:
select e.*,
       (case when lag(day) over (partition by type order by day) = date_sub(e.day, interval 2 day)
             then lag(e.count) over (partition by type order by day)
             when lag(day, 2) over (partition by type order by day) = date_sub(e.day, interval 2 day)
             then lag(e.count, 2) over (partition by type order by day)
        end) as prev_day2_count
from events e;
This works fine for a two day lag. It gets more cumbersome for longer lags.
EDIT:
A more general form uses window frames. Unfortunately, a RANGE frame needs a numeric ordering key, so there is an additional step:
select e.*,
       (case when min(day) over (partition by type order by diff range between 2 preceding and current row) = date_sub(day, interval 2 day)
             then first_value(e.count) over (partition by type order by diff range between 2 preceding and current row)
        end) as prev_day2_count
from (select e.*,
             date_diff(day, max(day) over (partition by type), DAY) as diff  -- "day" is a bad name for a column because it is also a date part
      from events e
     ) e;
And duh! The case expression is not necessary:
select e.*,
       first_value(e.count) over (partition by type order by diff range between 2 preceding and 2 preceding) as prev_day2_count
from (select e.*,
             date_diff(day, max(day) over (partition by type), DAY) as diff  -- "day" is a bad name for a column because it is also a date part
      from events e
     ) e;
Below is for BigQuery Standard SQL
#standardSQL
SELECT *, IFNULL(FIRST_VALUE(count) OVER (win), 0) prev_count
FROM `project.dataset.events`
WINDOW win AS (PARTITION BY type ORDER BY UNIX_DATE(day) RANGE BETWEEN 2 PRECEDING AND 2 PRECEDING)
If applied to the sample data from your question, the result is:
Row day type count prev_count
1 2019-06-08 a 1 0
2 2019-06-09 a 2 0
3 2019-06-10 a 3 1
4 2019-06-07 b 4 0
5 2019-06-09 b 5 4
I'm having trouble getting a cumulative distinct count, so let's assume the dataset below.
DATE RID
1/1/18 1
1/1/18 2
1/1/18 3
1/1/18 3
So if we run this query
SELECT DATE, COUNT(DISTINCT RID) FROM TABLE GROUP BY DATE;
we would expect it to return 3. However, let's assume that the data for the next day is as follows.
DATE RID
1/2/18 1
1/2/18 6
1/2/18 9
How would you write a query to get the following results, where the data for 1/1/18 is considered when returning the distinct count for 1/2/18?
So it would be the following results.
Date    Count(*)
1/1/18  3
1/2/18  5  <- 1/1/18 distinct count plus the new distinct RIDs on 1/2/18
Hope that makes sense, keep in mind this is a very large dataset if that changes things.
You can do a cumulative count of the earliest date for each rid:
select mindate, count(*), sum(count(*)) over (order by mindate)
from (select rid, min(date) as mindate
from t
group by rid
) t
group by mindate
order by mindate;
Note: this will be missing dates that are not the mindate for any rid. Here is one way to get all the dates, if that is an issue:
select mindate, count(rid), sum(count(rid)) over (order by mindate)
from ((select rid, min(date) as mindate
from t
group by rid
)
union all
(select distinct NULL, date
from t
)
) rd
group by mindate
order by mindate;
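To sanity-check the first query's logic, here is a self-contained version against your sample rows (a sketch; the VALUES syntax is Postgres-flavored, so adjust to your dialect):
with t(date, rid) as (
    values (date '2018-01-01', 1), (date '2018-01-01', 2),
           (date '2018-01-01', 3), (date '2018-01-01', 3),
           (date '2018-01-02', 1), (date '2018-01-02', 6),
           (date '2018-01-02', 9)
)
select mindate, count(*), sum(count(*)) over (order by mindate)
from (select rid, min(date) as mindate
      from t
      group by rid
     ) t
group by mindate
order by mindate;
-- expected: 2018-01-01 | 3 | 3
--           2018-01-02 | 2 | 5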
The query below can give the required cumulative distinct count.
--Step 3:
SELECT dt,
cum_distinct_cnt
FROM (
--Step 2:
SELECT rid,
dt,
COUNT(CASE WHEN row_num = 1 THEN rid END) OVER (ORDER BY dt ROWS BETWEEN Unbounded PRECEDING AND CURRENT ROW) cum_distinct_cnt
FROM (
--Step 1:
SELECT rid,
dt,
ROW_NUMBER() OVER (PARTITION BY rid ORDER BY dt) row_num
FROM table) innerTab1
) innerTab2
QUALIFY ROW_NUMBER() OVER (PARTITION BY dt ORDER BY cum_distinct_cnt DESC) = 1
Since your dataset is very large, you can break the query above into the steps marked in its comments and create work tables to populate innerTab1/innerTab2 before producing the final output.
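If I've traced the window frames right, against your sample data Step 1 marks row_num = 1 only at the first appearance of each RID (1, 2 and 3 on 1/1/18; 6 and 9 on 1/2/18), so the running count and the final QUALIFY step should return:
dt      cum_distinct_cnt
1/1/18  3
1/2/18  5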
I have this example table:
group | type | sold | date
x     | 1    | 10   | 201801
x     | 1    | 44   | 201705
y     | 3    | 33   | 201801
y     | 3    | 3    | 201705
x     | 2    | 10   | 201701
I'm having trouble returning one record for each group and type, with the amount sold at the most recent date (date is an integer).
i.e.
group | type | sold | date
x     | 1    | 10   | 201801
y     | 3    | 33   | 201801
x     | 2    | 10   | 201701
I tried selecting each column with sum(sold) and max(cast(date as int)), grouping by the rest, but it doesn't work.
I tried WHERE date IN (select max(date)). I couldn't get that to work either.
This is much trickier than I thought!
This is a simple application of the aggregate LAST() function (many developers choose to ignore it, for reasons that escape me). I use sum(sold) just in case there are several rows for the same max date in a group.
Please note that GROUP and DATE (and TYPE, too, actually) are Oracle keywords and should not be used as column names. I changed GROUP to GRP and DATE to DT, and you should do the same.
select grp, type, sum(sold) keep (dense_rank last order by dt) as sold, max(dt) as dt
from <table_name>
group by grp, type
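For example, against the sample rows (a sketch with the columns renamed to grp and dt as suggested, and hand-traced output, so treat with care):
with t as (
    select 'x' as grp, 1 as type, 10 as sold, 201801 as dt from dual union all
    select 'x', 1, 44, 201705 from dual union all
    select 'y', 3, 33, 201801 from dual union all
    select 'y', 3,  3, 201705 from dual union all
    select 'x', 2, 10, 201701 from dual
)
select grp, type,
       sum(sold) keep (dense_rank last order by dt) as sold,
       max(dt) as dt
from t
group by grp, type;
This should return one row per (grp, type) with the sold value from the latest dt: (x, 1, 10, 201801), (x, 2, 10, 201701) and (y, 3, 33, 201801).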
You can use row_number() if you like. I have used the column names date_t and group_t instead of date and group, respectively.
SELECT group_t,
TYPE,
sold,
date_t
FROM (SELECT group_t,
TYPE,
sold,
date_t,
row_number()
over (
PARTITION BY group_t, TYPE
ORDER BY date_t DESC ) rn
FROM table1)
WHERE rn = 1
ORDER BY date_t DESC;
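One design note: row_number() keeps an arbitrary single row if two rows in a group share the same max date_t. If you'd rather keep all tied rows, rank() is a drop-in replacement - a sketch of the only line that changes:
rank() over (PARTITION BY group_t, TYPE ORDER BY date_t DESC) rn
with the outer WHERE rn = 1 left as-is.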