Add first and last date of a sequence - SQL

I am working on a database that has a huge number of rows, and I want to collapse it so that repeated records are removed. The table has a single date column, which I want to convert into startDate and endDate. Please check:
id   | date       | price | minutes | prefixId | sellerId | routeTypeId
-----|------------|-------|---------|----------|----------|------------
1234 | 2020-01-01 | 0.123 | 0       | 1        | 1        | 1
1235 | 2020-01-04 | 0.123 | 0       | 1        | 1        | 1
1236 | 2020-01-05 | 0.123 | 123     | 1        | 1        | 1
1237 | 2020-01-06 | 0.123 | 31      | 1        | 1        | 1
1238 | 2020-01-07 | 0.123 | 23      | 1        | 1        | 1
1239 | 2020-01-08 | 0.130 | 41      | 1        | 2        | 1
1240 | 2020-01-09 | 0.130 | 0       | 1        | 1        | 1
What I am looking for is:
id   | startDate  | endDate    | price | minutes | prefixId | sellerId | routeTypeId
-----|------------|------------|-------|---------|----------|----------|------------
1234 | 2020-01-01 | 2020-01-01 | 0.123 | 0       | 1        | 1        | 1
1235 | 2020-01-04 | 2020-01-07 | 0.123 | 0       | 1        | 1        | 1
1239 | 2020-01-08 | 2020-01-08 | 0.130 | 41      | 1        | 2        | 1
1240 | 2020-01-09 | 2020-01-09 | 0.130 | 0       | 1        | 1        | 1
Dates are considered part of the same series when price, prefixId, sellerId, and routeTypeId match the previous row and the dates are consecutive, with no gap between them. For example, 2020-01-01, 2020-01-02, 2020-01-10 form two separate series.

This is a gaps-and-islands problem. You can use lag() and a cumulative sum:
select min(id) as id,
       price, prefixId, sellerId, routeTypeId,
       min(minutes) as minutes,
       min(date) as startDate,
       max(date) as endDate
from (select t.*,
             sum(case when prev_date = date - interval '1 day' then 0 else 1 end)
                 over (order by date) as grp
      from (select t.*,
                   lag(date) over (partition by price, prefixId, sellerId, routeTypeId
                                   order by date) as prev_date
            from t
           ) t
     ) t
group by grp, price, prefixId, sellerId, routeTypeId
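For reference, here is a minimal setup for trying the query, with PostgreSQL syntax assumed; the table name t matches the answer and the rows come from the question:
-- hypothetical test table; column names and data taken from the question
create table t (
    id          int,
    date        date,
    price       numeric,
    minutes     int,
    prefixId    int,
    sellerId    int,
    routeTypeId int
);
insert into t values
    (1234, '2020-01-01', 0.123,   0, 1, 1, 1),
    (1235, '2020-01-04', 0.123,   0, 1, 1, 1),
    (1236, '2020-01-05', 0.123, 123, 1, 1, 1),
    (1237, '2020-01-06', 0.123,  31, 1, 1, 1),
    (1238, '2020-01-07', 0.123,  23, 1, 1, 1),
    (1239, '2020-01-08', 0.130,  41, 1, 2, 1),
    (1240, '2020-01-09', 0.130,   0, 1, 1, 1);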

This is a "Gaps & Islands" problem. You can do it using:
select min(id)    as id,
       min(date)  as start_date,
       max(date)  as end_date,
       min(price) as price,
       ...
from (
    select *,
           sum(inc) over (order by id) as grp
    from (
        select *,
               case when price = lag(price) over (order by id)
                     and date = lag(date) over (partition by price, prefixId, sellerId, routeTypeId
                                                order by id) + interval '1 day'
                    then 0
                    else 1
               end as inc
        from t
    ) x
) y
group by grp
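Another common variant of the same idea, sketched here assuming PostgreSQL syntax and the same table t, is the "date minus row_number" trick: within one combination of attributes, consecutive dates all produce the same constant once a running row number (taken as days) is subtracted, and that constant can serve directly as the island key. min(minutes) simply mirrors the answers above.
select min(id)      as id,
       min(date)    as startDate,
       max(date)    as endDate,
       price,
       min(minutes) as minutes,
       prefixId, sellerId, routeTypeId
from (
    select t.*,
           -- consecutive dates within one attribute group share this value
           date - row_number() over (partition by price, prefixId, sellerId, routeTypeId
                                     order by date) * interval '1 day' as grp
    from t
) x
group by price, prefixId, sellerId, routeTypeId, grp
order by startDate;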

Related

Add a counting condition into dense_rank window Function SQL

I have a query that counts how many times a customer has visited and whether they converted or not.
What I'd like is for the dense_rank to restart the count once there has been a conversion:
SELECT
    uid,
    channel,
    time,
    conversion,
    dense_rank() OVER (PARTITION BY uid ORDER BY time ASC) AS visit_order
FROM table
In the current table output, this customer (uid) had a conversion at visit 18. I would now want the visit_order count from dense_rank to restart at 0 for the same customer, until it hits the next conversion that is non-null.
See this (I do not like "try this" 😉):
SELECT
    id,
    ts,
    conversion,
    -- SC,
    ROW_NUMBER() OVER (PARTITION BY id, SC) AS R
FROM (
    SELECT
        id,
        ts,
        conversion,
        -- COUNT(conversion) OVER (PARTITION BY id, conversion=0 ORDER BY ts) CC,
        SUM(CASE WHEN conversion=1 THEN 1000 ELSE 1 END) OVER (PARTITION BY id ORDER BY ts)
          - SUM(CASE WHEN conversion=1 THEN 1000 ELSE 1 END) OVER (PARTITION BY id ORDER BY ts) % 1000 AS SC
    FROM sample
    ORDER BY ts
) x
ORDER BY ts;
DBFIDDLE
output:
id | ts                  | conversion | R
---|---------------------|------------|---
1  | 2022-01-15 10:00:00 | 0          | 1
1  | 2022-01-16 10:00:00 | 0          | 2
1  | 2022-01-17 10:00:00 | 0          | 3
1  | 2022-01-18 10:00:00 | 1          | 1
1  | 2022-01-19 10:00:00 | 0          | 2
1  | 2022-01-20 10:00:00 | 0          | 3
1  | 2022-01-21 10:00:00 | 0          | 4
1  | 2022-01-22 10:00:00 | 0          | 5
1  | 2022-01-23 10:00:00 | 0          | 6
1  | 2022-01-24 10:00:00 | 0          | 7
1  | 2022-01-25 10:00:00 | 1          | 1
1  | 2022-01-26 10:00:00 | 0          | 2
1  | 2022-01-27 10:00:00 | 0          | 3
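If conversion only ever holds 0 or 1 (an assumption, not something stated in the answer), the same restart can be expressed without the 1000 trick: a running SUM of conversion defines the block, so each conversion row opens a new numbering block. A sketch against the same sample table:
SELECT id, ts, conversion,
       ROW_NUMBER() OVER (PARTITION BY id, grp ORDER BY ts) AS R
FROM (
    SELECT id, ts, conversion,
           -- running count of conversions up to and including this row
           SUM(conversion) OVER (PARTITION BY id ORDER BY ts
                                 ROWS UNBOUNDED PRECEDING) AS grp
    FROM sample
) x
ORDER BY ts;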

ROW_NUMBER() Based on Dates

I have the following data:
test_date
2018-07-01
2018-07-02
...
2019-06-30
2019-07-01
2019-07-02
...
2020-06-30
2020-07-01
I want to increment a row_number value every time right(test_date,5) = '07-01' so that my final result looks like this:
test_date  | row_num
-----------|--------
2018-07-01 | 1
2018-07-02 | 1
...        | 1
2019-06-30 | 1
2019-07-01 | 2
2019-07-02 | 2
...        | 2
2020-06-30 | 2
2020-07-01 | 3
I tried doing something like this:
, ROW_NUMBER() OVER (
PARTITION BY CASE WHEN RIGHT(a.[test_date],5) = '07-01' THEN 1 ELSE 0 END
ORDER BY a.[test_date]
) AS [test2]
But that did not work out for me.
Any suggestions?
Use datepart to identify the boundary date (1 July), and then add 1 to a running sum every time it occurs (assuming there will never be more than one row per date).
declare @Test table (test_date date);

insert into @Test (test_date)
values
    ('2018-07-01'),
    ('2018-07-02'),
    ('2019-06-30'),
    ('2019-07-01'),
    ('2019-07-02'),
    ('2020-06-30'),
    ('2020-07-01');

select *
    , sum(case when datepart(month, test_date) = 7 and datepart(day, test_date) = 1 then 1 else 0 end)
          over (order by test_date asc) as row_num
from @Test
order by test_date asc;
Returns:
test_date  | row_num
-----------|--------
2018-07-01 | 1
2018-07-02 | 1
2019-06-30 | 1
2019-07-01 | 2
2019-07-02 | 2
2020-06-30 | 2
2020-07-01 | 3
You can do it with the DENSE_RANK() window function if you subtract 6 months from your dates:
SELECT test_date,
DENSE_RANK() OVER (ORDER BY YEAR(DATEADD(month, -6, test_date))) row_num
FROM tablename
See the demo.
Results:
test_date | row_num
---------- | -------
2018-07-01 | 1
2018-07-02 | 1
2019-06-30 | 1
2019-07-01 | 2
2019-07-02 | 2
2020-06-30 | 2
2020-07-01 | 3
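To make the shift easier to see, the value DENSE_RANK() orders by is effectively a fiscal-year label that starts on 1 July; a rough way to inspect it (T-SQL, against the same hypothetical tablename) is:
select test_date,
       year(dateadd(month, -6, test_date)) as shifted_year,  -- the value DENSE_RANK orders by
       dense_rank() over (order by year(dateadd(month, -6, test_date))) as row_num
from tablename
order by test_date;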
Build a running total based on month = 7 and day = 1:
declare @Test table (mykey int, test_date date);

insert into @Test (mykey, test_date)
values
    (1, '2018-07-01'),
    (2, '2018-07-02'),
    (3, '2019-06-30'),
    (4, '2019-07-01'),
    (5, '2019-07-02'),
    (6, '2020-06-30'),
    (7, '2020-07-01');

select mykey, test_date,
       sum(case when DatePart(Month, test_date) = 7 and DatePart(Day, test_date) = 1 then 1 else 0 end)
           over (order by mykey) as RunningTotal
from @Test
order by mykey;

SQL select students logins

I have a table STUDENT_LAST_LOGIN, which contains data about students' last logins.
ID | STUDENT_ID | DATE             | TIME
---|------------|------------------|-------
1  | A          | 2020-02-01 12:00 | 15 MIN
2  | B          | 2020-02-02 12:00 | 45 MIN
3  | C          | 2020-02-03 12:00 | 25 MIN
In addition, there is a STUDENT_LOGIN table, which contains data about all of the students' logins.
ID | STUDENT_ID | DATE             | TIME
---|------------|------------------|-------
1  | A          | 2020-02-01 12:00 | 15 MIN
4  | A          | 2020-01-01 14:00 | 33 MIN
2  | B          | 2020-02-02 12:00 | 45 MIN
5  | B          | 2020-01-02 13:30 | 47 MIN
10 | B          | 2020-01-03 13:30 | 27 MIN
6  | B          | 2020-01-02 10:00 | 44 MIN
3  | C          | 2020-02-03 12:00 | 25 MIN
7  | C          | 2020-01-03 10:00 | 12 MIN
8  | C          | 2020-01-03 18:00 | 56 MIN
9  | C          | 2020-01-04 12:00 | 88 MIN
As a result I need to get something like this:
STUDENT_ID | LAST_LOGIN       | LAST_LOGIN_ONE_MONTH_AGO | TIME   | TIME_ONE_MONTH_AGO
-----------|------------------|--------------------------|--------|-------------------
A          | 2020-02-01 12:00 | 2020-01-01 14:00         | 15 min | 33 min
B          | 2020-02-02 12:00 | 2020-01-02 13:30         | 45 min | 47 min
C          | 2020-02-03 12:00 | 2020-01-03 18:00         | 25 min | 56 min
Can you help me write this?
You need to write your query something like this:
SELECT LAST_LOGIN, LAST_LOGIN_ONE_MONTH_AGO, S_L.TIME, S_L.TIME_ONE_MONTH_AGO
FROM STUDENT_LAST_LOGIN S_L_L
INNER JOIN STUDENT_LOGIN S_L ON S_L_L.id = S_L.id
WHERE S_L_L.date < DATEADD(month, -1, GETDATE())
You can use a window function as follows:
SELECT *
FROM (SELECT SLL.STUDENT_ID,
             SLL.DATE LAST_LOGIN,
             SL.DATE  LAST_LOGIN_ONE_MONTH_AGO,
             SLL.TIME,
             SL.TIME  TIME_ONE_MONTH_AGO,
             ROW_NUMBER() OVER (PARTITION BY SLL.STUDENT_ID
                                ORDER BY SL.DATE DESC NULLS LAST) AS RN
      FROM STUDENT_LAST_LOGIN SLL
      LEFT JOIN STUDENT_LOGIN SL
             ON SL.STUDENT_ID = SLL.STUDENT_ID
            AND TRUNC(SL.DATE) = ADD_MONTHS(TRUNC(SLL.DATE), -1)
     )
WHERE RN = 1
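The join condition above leans on two Oracle date helpers: TRUNC(date) strips the time portion, and ADD_MONTHS(date, -1) returns the same day of the previous month, so a STUDENT_LOGIN row only matches if it falls on the day exactly one month before the student's last login. A quick way to see the two values side by side (a throwaway example, not part of the original answer):
SELECT TRUNC(TO_DATE('2020-02-02 12:00', 'YYYY-MM-DD HH24:MI'))                 AS last_login_day,
       ADD_MONTHS(TRUNC(TO_DATE('2020-02-02 12:00', 'YYYY-MM-DD HH24:MI')), -1) AS one_month_before
FROM dual;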
I can only speculate that you want the most recent login and then the most recent login from the calendar month before that. I would suggest conditional aggregation:
select student_id,
       max(case when month_seqnum = 1 then date end) as last_login,
       max(case when month_seqnum = 2 then date end) as last_login_one_month_ago,
       max(case when month_seqnum = 1 then time end) as time,
       max(case when month_seqnum = 2 then time end) as time_one_month_ago
from (select sll.*,
             row_number() over (partition by student_id, to_char(date, 'YYYY-MM')
                                order by date desc
                               ) as seqnum,
             dense_rank() over (partition by student_id
                                order by to_char(date, 'YYYY-MM') desc) as month_seqnum
      from student_login sll
     ) sll
where month_seqnum in (1, 2) and seqnum = 1
group by student_id;
I think this returns the values that you specify.

Count and pivot a table by date

I would like to identify the returning customers from an Oracle(11g) table like this:
CustID | Date
-------|----------
XC321 | 2016-04-28
AV626 | 2016-05-18
DX970 | 2016-06-23
XC321 | 2016-05-28
XC321 | 2016-06-02
So I can see which customers returned within various windows, for example within 10, 20, 30, 40 or 50 days. For example:
CustID | 10_day | 20_day | 30_day | 40_day | 50_day
-------|--------|--------|--------|--------|--------
XC321 | | | 1 | |
XC321 | | | | 1 |
I would even accept a result like this:
CustID | Date | days_from_last_visit
-------|------------|---------------------
XC321 | 2016-05-28 | 30
XC321 | 2016-06-02 | 5
I guess it would use a partition by windowing clause with unbounded following and preceding clauses... but I cannot find any suitable examples.
Any ideas...?
Thanks
No need for the windowing frame clauses here; you can do it with conditional aggregation, using a CASE expression over the gap computed by LEAD():
SELECT t.custID,
       COUNT(CASE WHEN (t.date - prev_visit) <= 10 THEN 1 END)             AS "10_day",
       COUNT(CASE WHEN (t.date - prev_visit) BETWEEN 11 AND 20 THEN 1 END) AS "20_day",
       COUNT(CASE WHEN (t.date - prev_visit) BETWEEN 21 AND 30 THEN 1 END) AS "30_day",
       .....
FROM (SELECT s.custID,
             s.date,
             LEAD(s.date) OVER (PARTITION BY s.custID ORDER BY s.date DESC) AS prev_visit
      FROM YourTable s) t
GROUP BY t.custID
Oracle Setup:
CREATE TABLE customers ( CustID, Activity_Date ) AS
SELECT 'XC321', DATE '2016-04-28' FROM DUAL UNION ALL
SELECT 'AV626', DATE '2016-05-18' FROM DUAL UNION ALL
SELECT 'DX970', DATE '2016-06-23' FROM DUAL UNION ALL
SELECT 'XC321', DATE '2016-05-28' FROM DUAL UNION ALL
SELECT 'XC321', DATE '2016-06-02' FROM DUAL;
Query:
SELECT *
FROM (
SELECT CustID,
Activity_Date AS First_Date,
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '10' DAY FOLLOWING )
- 1 AS "10_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '20' DAY FOLLOWING )
- 1 AS "20_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '30' DAY FOLLOWING )
- 1 AS "30_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '40' DAY FOLLOWING )
- 1 AS "40_Day",
COUNT(1) OVER ( PARTITION BY CustID
ORDER BY Activity_Date
RANGE BETWEEN CURRENT ROW AND INTERVAL '50' DAY FOLLOWING )
- 1 AS "50_Day",
ROW_NUMBER() OVER ( PARTITION BY CustID ORDER BY Activity_Date ) AS rn
FROM Customers
)
WHERE rn = 1;
Output:
CUSTID | FIRST_DATE          | 10_Day | 20_Day | 30_Day | 40_Day | 50_Day | RN
-------|---------------------|--------|--------|--------|--------|--------|---
AV626  | 2016-05-18 00:00:00 | 0      | 0      | 0      | 0      | 0      | 1
DX970  | 2016-06-23 00:00:00 | 0      | 0      | 0      | 0      | 0      | 1
XC321  | 2016-04-28 00:00:00 | 0      | 0      | 1      | 2      | 2      | 1
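Since the question also accepts the simpler "days since the previous visit" shape, a LAG() over each customer's visits gives it directly; this sketch assumes the customers table from the Oracle setup above (date subtraction in Oracle returns a number of days):
SELECT CustID,
       Activity_Date,
       Activity_Date - LAG(Activity_Date) OVER (PARTITION BY CustID
                                                ORDER BY Activity_Date) AS days_from_last_visit
FROM customers
ORDER BY CustID, Activity_Date;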
Here is an answer that works for me; I have based it on the answers above, with thanks for the contributions from MT0 and Sagi:
SELECT CustID,
       visit_date,
       Prev_visit,
       COUNT(CASE WHEN Days_between_visits <= 10 THEN 1 END)             AS "0-10_day",
       COUNT(CASE WHEN Days_between_visits BETWEEN 11 AND 20 THEN 1 END) AS "11-20_day",
       COUNT(CASE WHEN Days_between_visits BETWEEN 21 AND 30 THEN 1 END) AS "21-30_day",
       COUNT(CASE WHEN Days_between_visits BETWEEN 31 AND 40 THEN 1 END) AS "31-40_day",
       COUNT(CASE WHEN Days_between_visits BETWEEN 41 AND 50 THEN 1 END) AS "41-50_day",
       COUNT(CASE WHEN Days_between_visits > 50 THEN 1 END)              AS "51+_day"
FROM (SELECT CustID,
             visit_date,
             LEAD(T1.visit_date) OVER (PARTITION BY T1.CustID ORDER BY T1.visit_date DESC) AS Prev_visit,
             visit_date - LEAD(T1.visit_date) OVER (PARTITION BY T1.CustID ORDER BY T1.visit_date DESC) AS Days_between_visits
      FROM T1
     ) T2
WHERE Days_between_visits > 0
GROUP BY T2.CustID,
         T2.visit_date,
         T2.Prev_visit,
         T2.Days_between_visits;
This returns:
CUSTID | VISIT_DATE | PREV_VISIT | DAYS_BETWEEN_VISITS | 0-10_DAY | 11-20_DAY | 21-30_DAY | 31-40_DAY | 41-50_DAY | 51+_DAY
-------|------------|------------|---------------------|----------|-----------|-----------|-----------|-----------|--------
XC321  | 2016-05-28 | 2016-04-28 | 30                  |          |           | 1         |           |           |
XC321  | 2016-06-02 | 2016-05-28 | 5                   | 1        |           |           |           |           |

window function in redshift

I have some data that looks like this:
CustID EventID TimeStamp
1 17 1/1/15 13:23
1 17 1/1/15 14:32
1 13 1/1/15 14:54
1 13 1/3/15 1:34
1 17 1/5/15 2:54
1 1 1/5/15 3:00
2 17 2/5/15 9:12
2 17 2/5/15 9:18
2 1 2/5/15 10:02
2 13 2/8/15 7:43
2 13 2/8/15 7:50
2 1 2/8/15 8:00
I'm trying to use the row_number function to get it to look like this:
CustID EventID TimeStamp SeqNum
1 17 1/1/15 13:23 1
1 17 1/1/15 14:32 1
1 13 1/1/15 14:54 2
1 13 1/3/15 1:34 2
1 17 1/5/15 2:54 3
1 1 1/5/15 3:00 4
2 17 2/5/15 9:12 1
2 17 2/5/15 9:18 1
2 1 2/5/15 10:02 2
2 13 2/8/15 7:43 3
2 13 2/8/15 7:50 3
2 1 2/8/15 8:00 4
I tried this:
row_number() over (partition by custID, EventID
                   order by custID, TimeStamp asc) as SeqNum
but got this back:
CustID EventID TimeStamp SeqNum
1 17 1/1/15 13:23 1
1 17 1/1/15 14:32 2
1 13 1/1/15 14:54 3
1 13 1/3/15 1:34 4
1 17 1/5/15 2:54 5
1 1 1/5/15 3:00 6
2 17 2/5/15 9:12 1
2 17 2/5/15 9:18 2
2 1 2/5/15 10:02 3
2 13 2/8/15 7:43 4
2 13 2/8/15 7:50 5
2 1 2/8/15 8:00 6
how can I get it to sequence based on the change in the EventID?
This is tricky. You need a multi-step process. You need to identify the groups (a difference of row_number() works for this). Then, assign an increasing constant to each group. And then use dense_rank():
select sd.*, dense_rank() over (partition by custid order by mints) as seqnum
from (select sd.*,
             min(timestamp) over (partition by custid, eventid, grp) as mints
      from (select sd.*,
                   (row_number() over (partition by custid order by timestamp) -
                    row_number() over (partition by custid, eventid order by timestamp)
                   ) as grp
            from somedata sd
           ) sd
     ) sd;
Another method is to use lag() and a cumulative sum:
select sd.*,
       sum(case when prev_eventid is null or prev_eventid <> eventid
                then 1 else 0
           end) over (partition by custid order by timestamp) as seqnum
from (select sd.*,
             lag(eventid) over (partition by custid order by timestamp) as prev_eventid
      from somedata sd
     ) sd;
EDIT:
The last time I used Amazon Redshift it didn't have row_number(). You can do:
select sd.*, dense_rank() over (partition by custid order by mints) as seqnum
from (select sd.*,
             min(timestamp) over (partition by custid, eventid, grp) as mints
      from (select sd.*,
                   (count(*) over (partition by custid order by timestamp
                                   rows between unbounded preceding and current row) -
                    count(*) over (partition by custid, eventid order by timestamp
                                   rows between unbounded preceding and current row)
                   ) as grp
            from somedata sd
           ) sd
     ) sd;
Try this code block:
WITH by_day AS (
    SELECT *,
           ts::date AS login_day
    FROM table_name
)
SELECT *,
       FIRST_VALUE(login_day) OVER (PARTITION BY userid
                                    ORDER BY login_day, userid
                                    ROWS UNBOUNDED PRECEDING) AS first_day
FROM by_day
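For what it's worth, FIRST_VALUE ordered by login_day is equivalent here to a windowed MIN, which avoids the explicit frame clause; this sketch keeps the answer's own table_name, ts and userid names (which differ from the question's sample data):
SELECT *,
       ts::date AS login_day,
       -- earliest login day per user, same result as the FIRST_VALUE version above
       MIN(ts::date) OVER (PARTITION BY userid) AS first_day
FROM table_name;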