How to partition the following table in DB2 - sql

I am trying to partition and order the following table. I have used all sorts of row_number() over() and dense_rank() over() combinations, but I am not getting what I need.
The MWE table is as follows:
Person  Visit       Last_Visit  Gap_1_yr
------  ----------  ----------  --------
1       01/01/2001  01/01/2000  NULL
1       01/01/2003  01/01/2001  gap
1       01/01/2004  01/01/2003  NULL
1       01/01/2006  01/01/2004  gap
2       01/01/2005  01/01/2002  gap
2       01/01/2010  01/01/2005  gap
where a person turns up for an appointment, and Gap_1_yr is flagged as 'gap' when the person's visit is more than 365 days after their previous appointment (I used a lag function for this).
What I want is, whenever there is a gap, to partition so that I have the following:
Person  Visit       Last_Visit  Gap_1_yr  SEQ
------  ----------  ----------  --------  ---
1       01/01/2001  01/01/2000  NULL      1
1       01/01/2003  01/01/2001  gap       2
1       01/01/2004  01/01/2003  NULL      2
1       01/01/2006  01/01/2004  gap       3
2       01/01/2005  01/01/2002  gap       1
2       01/01/2010  01/01/2005  gap       2
You can see that whenever there is a gap, the sequence increments by one and keeps that value until the next gap, restarting for each person.
I have tried:
row_number() over(partition by person order by gap)
but this increments SEQ on every row until a new person is reached, ignoring the gaps.
I have also tried:
dense_rank() over(partition by person order by gap)
which returns 1 in every cell of SEQ, and:
dense_rank() over(partition by person, gap order by gap)
which also returns all 1's.
Does anyone have any suggestions?

Convert the gap to a flag. Then use sum() to do a cumulative sum of the flag:
select mwe.*,
       sum(case when gap_1_yr = 'gap' then 1 else 0 end) over
           (partition by person order by visit) as seq
from mwe;
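For reference, here is a minimal sketch of the whole pipeline, assuming a hypothetical base table visits(person, visit) and that Last_Visit and Gap_1_yr are derived with lag(), as the question describes:
-- Sketch only: "visits" is an assumed base table name; adjust to your schema.
with mwe as (
    select person,
           visit,
           -- previous visit per person
           lag(visit) over (partition by person order by visit) as last_visit,
           -- flag visits that are more than 365 days after the previous one
           case when days(visit) - days(lag(visit) over (partition by person order by visit)) > 365
                then 'gap'
           end as gap_1_yr
    from visits
)
select mwe.*,
       -- the running count of gaps so far defines the group
       sum(case when gap_1_yr = 'gap' then 1 else 0 end) over
           (partition by person order by visit) as seq
from mwe;
Depending on how you want a person's very first visit treated (the running sum starts at 0 when the first row is not flagged), you may need to add 1 to seq to reproduce the desired SEQ column exactly.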

Related

Add a counter based on consecutive dates

I have an employee table with the employee name and the dates when the employee was on leave. My task is to identify employees who have taken 3 or 5 consecutive days of leave. I tried to add a row_number() but it wouldn't restart correctly based on the consecutive dates. The desired counter I am after is shown below. Any suggestions please?
Employee  Leave Date  Desired Counter
--------  ----------  ---------------
John      25-Jan-20   1
John      26-Jan-20   2
John      27-Jan-20   3
John      28-Jan-20   4
John      15-Mar-20   1
John      16-Mar-20   2
Mary      12-Feb-20   1
Mary      13-Feb-20   2
Mary      20-Apr-20   1
This is a gaps-and-islands problem: the islands represent consecutive days of leave, and you want to enumerate the rows of each island.
Here is an approach that subtracts a monotonically increasing row number from the date; the result is constant within each island of consecutive dates, so it can be used to build the groups:
select t.*,
       row_number() over(
           partition by employee, dateadd(day, -rn, leave_date)
           order by leave_date
       ) counter
from (
    select t.*,
           row_number() over(partition by employee order by leave_date) rn
    from mytable t
) t
order by employee, leave_date
Demo on DB Fiddle
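To answer the original question (finding employees with at least 3 consecutive days of leave), the same island key can be used to size each island and filter on it; a sketch, reusing mytable and treating 3 as the example threshold:
-- Size each island of consecutive leave days, then keep employees with a run of 3 or more.
select distinct employee
from (
    select employee,
           count(*) over (partition by employee, dateadd(day, -rn, leave_date)) as run_length
    from (
        select t.*,
               row_number() over (partition by employee order by leave_date) as rn
        from mytable t
    ) t
) x
where run_length >= 3;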

Count the number of transactions per month for an individual group by date Hive

I have a table of customer transactions where each item purchased by a customer is stored as one row. So, for a single transaction there can be multiple rows in the table. I also have a column called visit_date.
There is a category column called cal_month_nbr which ranges from 1 to 12 based on which month transaction occurred.
The data looks like below
Id  visit_date  Cal_month_nbr
--  ----------  -------------
1   01/01/2020  1
1   01/02/2020  1
1   01/01/2020  1
2   02/01/2020  2
1   02/01/2020  2
1   03/01/2020  3
3   03/01/2020  3
First, I want to know how many times each customer visits per month, based on visit_date, i.e. I want the output below:
id  cal_month_nbr  visit_per_month
--  -------------  ---------------
1   1              2
1   2              1
1   3              1
2   2              1
3   3              1
Second, I want the average frequency of visits per month for each id, i.e.:
id  Avg_freq_per_month
--  ------------------
1   1.33
2   1
3   1
I tried the query below, but it counts each item as one transaction:
select avg(count_e) as num_visits_per_month, individual_id
from
(
    select r.individual_id, cal_month_nbr, count(*) as count_e
    from ww_customer_dl_secure.cust_scan r
    group by r.individual_id, cal_month_nbr
    order by count_e desc
) as t
group by individual_id
I would appreciate any help, guidance or suggestions
You can divide the total visits by the number of months:
select individual_id,
count(*) / count(distinct cal_month_nbr)
from ww_customer_dl_secure.cust_scan c
group by individual_id;
If you want the average number of days per month, then:
select individual_id,
count(distinct visit_date) / count(distinct cal_month_nbr)
from ww_customer_dl_secure.cust_scan c
group by individual_id;
Actually, Hive may not be efficient at calculating count(distinct), so multiple levels of aggregation might be faster:
select individual_id, avg(num_visit_days)
from (select individual_id, cal_month_nbr, count(*) as num_visit_days
from (select distinct individual_id, visit_date, cal_month_nbr
from ww_customer_dl_secure.cust_scan c
) iv
group by individual_id, cal_month_nbr
) ic
group by individual_id;
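For the first requested output (visits per month for each id), a straightforward sketch is to count distinct visit dates per customer and month; individual_id is used here to match the column naming in the attempted query:
-- Visits per customer per month: each distinct visit_date within a month counts as one visit.
select individual_id,
       cal_month_nbr,
       count(distinct visit_date) as visit_per_month
from ww_customer_dl_secure.cust_scan
group by individual_id, cal_month_nbr;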

Most Efficient SQL to Calculate Running Streak Occurrences

I am looking for the most efficient manner to determine the longest occurrence of a streak within a given data set; specifically, to determine the longest winning streak of games.
Below is the SQL that I have thus far, and it does seem to perform very fast, and as expected from the limited testing I've done on a dataset with around 100,000 records.
DECLARE @HistoryDateTimeLimit datetime = '3/15/2018';

-- CTE to create result subset from voting dataset.
WITH Results AS (
    SELECT
        EntityPlayerId,
        (CASE
            WHEN VoteTeamA = 1 AND ParticipantAScore > ParticipantBScore THEN 'W'
            WHEN VoteTeamA = 0 AND ParticipantBScore > ParticipantAScore THEN 'W'
            ELSE 'L'
        END) AS WinLoss,
        match.ScheduledStartDateTime
    FROM
        [dbo].[MatchVote] vote
        INNER JOIN [dbo].[MatchMetaData] match ON vote.MatchId = match.MatchId
    WHERE
        IsComplete = 1
        AND ScheduledStartDateTime >= @HistoryDateTimeLimit
),
-- CTE to create a subset of data with streak type as WinLoss and a streak grouping
-- derived from the difference of two ROW_NUMBER() sequences.
Streaks AS (
    SELECT
        EntityPlayerId,
        ScheduledStartDateTime,
        WinLoss,
        ROW_NUMBER() OVER (PARTITION BY EntityPlayerId ORDER BY ScheduledStartDateTime) -
        ROW_NUMBER() OVER (PARTITION BY EntityPlayerId, WinLoss ORDER BY ScheduledStartDateTime) AS Streak
    FROM
        Results
),
-- CTE to summarize the partitioned vote streaks by WinLoss and a begin/end date/time,
-- with the total count in the streak.
StreakCounts AS (
    SELECT
        EntityPlayerId,
        WinLoss,
        MIN(ScheduledStartDateTime) StreakStart,
        MAX(ScheduledStartDateTime) StreakEnd,
        COUNT(*) AS Streak
    FROM
        Streaks
    GROUP BY
        EntityPlayerId, WinLoss, Streak
),
-- CTE to select the MAXIMUM (longest) vote streak for WinLoss of W (win) grouped by players.
LongestWinStreak AS (
    SELECT
        EntityPlayerId,
        MAX(Streak) AS LongestStreak
    FROM
        StreakCounts
    WHERE
        WinLoss = 'W'
    GROUP BY
        EntityPlayerId
)
-- Selecting the useful data from the LongestWinStreak CTE.
SELECT * FROM LongestWinStreak;
This is the 3rd iteration of the code; at first I felt I was overthinking it, using windows with the LAG function to define a reset period that was later used for partitioning.
[UPDATE]: SQLFiddle example at http://sqlfiddle.com/#!18/5b33a/1 -- Sample data for the two tables that are used above are as follows.
The data is meant to show the schema, and can be extrapolated for your own testing/usage;
MatchVote table data.
EntityPlayerId IsExtMatch MatchId VoteTeamA VoteDateTime IsComplete
-------------------- ------------ -------------------- --------- ----------------------- ----------
158 1 152639 0 2018-03-20 23:25:28.910 1
158 1 156058 1 2018-03-13 23:36:57.517 1
MatchMetaData table data.
MatchId IsTeamTournament MatchCompletedDateTime ScheduledStartDateTime MatchIsFinalized TournamentId TournamentTitle TournamentLogoUrl TournamentLogoThumbnailUrl GameName GameShortCode GameLogoUrl ParticipantAScore ParticipantAName ParticipantALogoUrl ParticipantBScore ParticipantBName ParticipantBLogoUrl
--------- ---------------- ----------------------- ----------------------- ---------------- -------------------- ------------------ ----------------------- ---------------------------- --------------------------------- -------------- ----------------------- ------------------ ------------------- --------------------- ----------------- ------------------- --------------------
23354 1 2014-07-30 00:30:00.000 2014-07-30 00:00:00.000 1 543 Sample https://...Small.png https://...Small.png Dota 2 Dota 2 https://...logo.png 3 Natus Vincere.US https://...VI.png 0 Not Today https://...ay.png
44324 1 2014-12-15 12:40:00.000 2014-12-15 11:40:00.000 1 786 Sample https://...Small.png https://...Small.png Counter-Strike: Global Offensive CS:GO https://...logo.png 0 Avalier's stars https://...oto.png 1 Kassad's Legends https://...oto.png

Get MAX count but keep the repeated calculated value if highest

I have the following table; I am using SQL Server 2008.
BayNo  FixDateTime          FixType
-----  -------------------  ------------------
1      04/05/2015 16:15:00  tyre change
1      12/05/2015 00:15:00  oil change
1      12/05/2015 08:15:00  engine tuning
1      04/05/2016 08:11:00  car tuning
2      13/05/2015 19:30:00  puncture
2      14/05/2015 08:00:00  light repair
2      15/05/2015 10:30:00  super op
2      20/05/2015 12:30:00  wiper change
2      12/05/2016 09:30:00  denting
2      12/05/2016 10:30:00  wiper repair
2      12/06/2016 10:30:00  exhaust repair
4      12/05/2016 05:30:00  stereo unlock
4      17/05/2016 15:05:00  door handle repair
On any given day, I need to find the highest number of fixes made on a given bay number, and if that calculated maximum is repeated (a tie) then each tied row should also appear in the result set.
So I would like to see the result set as follows:
BayNo  FixDateTime          noOfFixes
-----  -------------------  ---------
1      12/05/2015 00:15:00  2
2      12/05/2016 09:30:00  2
4      12/05/2016 05:30:00  1
4      17/05/2016 15:05:00  1
I managed to get the counts for each, but I am struggling to get the max and keep the repeated highest value. Can someone help please?
Use window functions.
Get the count for each day by bayno and also find the min fixdatetime for each day per bayno.
Then use dense_rank to compute the highest ranked row for each bayno based on the number of fixes.
Finally get the highest ranked rows.
select distinct bayno, minfixdatetime, no_of_fixes
from (
    select bayno, minfixdatetime, no_of_fixes,
           dense_rank() over(partition by bayno order by no_of_fixes desc) as rnk
    from (
        select t.*,
               count(*) over(partition by bayno, cast(fixdatetime as date)) as no_of_fixes,
               min(fixdatetime) over(partition by bayno, cast(fixdatetime as date)) as minfixdatetime
        from tablename t
    ) x
) y
where rnk = 1
Sample Demo
You are looking for rank() or dense_rank(). I would write the query like this:
select bayno, thedate, numFixes
from (select bayno, cast(fixdatetime as date) as thedate,
             count(*) as numFixes,
             rank() over (partition by bayno order by count(*) desc) as seqnum
      from t
      group by bayno, cast(fixdatetime as date)
     ) b
where seqnum = 1;
Note that this returns the date in question. The date does not have a time component.
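If you also want a representative FixDateTime rather than just the date (the desired result set shows the first fix of the winning day), the same grouped query can return min(fixdatetime) as well; a sketch, where firstFix is just an illustrative alias:
-- Same grouped query, additionally carrying the earliest fix time of each winning day.
select bayno, firstFix, numFixes
from (select bayno,
             min(fixdatetime) as firstFix,
             count(*) as numFixes,
             rank() over (partition by bayno order by count(*) desc) as seqnum
      from t
      group by bayno, cast(fixdatetime as date)
     ) b
where seqnum = 1;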

Generate sequence based on the value in the previous row and current row

I have the table below containing student information.
S_ID   Group_ID  Date       Score
-----  --------  ---------  -----
12345  1         1/1/2015   1
12345  1         2/1/2015   2
12345  1         3/1/2015   4
12345  1         4/1/2015   5
12345  1         9/1/2015   3
12345  1         10/1/2015  8
12345  2         1/1/2015   2
12345  2         2/1/2015   4
12345  2         3/1/2015   6
I want to generate a new table for a few students after adding a sequence column, as shown below:
S_ID   Group_ID  Date       Score  Sequence
-----  --------  ---------  -----  --------
12345  1         1/1/2015   1      1
12345  1         2/1/2015   2      2
12345  1         3/1/2015   4      3
12345  1         4/1/2015   5      4
12345  1         9/1/2015   3      3
12345  1         10/1/2015  8      4
12345  2         1/1/2015   2      2
12345  2         2/1/2015   4      3
12345  2         3/1/2015   6      4
Rules:
Sequence should be generated for each combination of S_ID and Group_ID.
For the first record, the sequence number will be the same as the Score.
From the 2nd record onwards, it will be 1 + the previous sequence number.
If the difference between the date of the previous row and the current row is more than 100 days, the sequence number will be restarted (i.e. it is again the same as the Score for that record).
This is a large table and I am looking for the most optimized SQL. Any help would be greatly appreciated
The trick here is to find where the sequence numbers start over: at a new student, a new group, or when the previous date has too big a gap. For the latter, you can use lag() to get the previous date, flag the rows where a new stretch starts, and take a cumulative sum of that flag to form a grouping.
select t.*,
       (first_value(score) over (partition by s_id, group_id, grp order by date) +
        row_number() over (partition by s_id, group_id, grp order by date) - 1
       ) as sequence
from (select t.*,
             sum(case when prev_date is null or prev_date < date - 100
                      then 1 else 0
                 end) over (partition by s_id, group_id order by date) as grp
      from (select t.*,
                   lag(date) over (partition by s_id, group_id order by date) as prev_date
            from t
           ) t
     ) t;
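One caveat: the date arithmetic in prev_date < date - 100 is dialect-specific. In DB2 (the dialect of the first question above) it would be written with a labelled duration, while in SQL Server you would use datediff(day, prev_date, date) > 100. A sketch of just the grouping step in DB2-style syntax:
-- Grouping step only, using a DB2-style labelled duration for the 100-day cutoff.
select t.*,
       sum(case when prev_date is null
                  or prev_date < date - 100 days
                then 1 else 0
           end) over (partition by s_id, group_id order by date) as grp
from (select t.*,
             lag(date) over (partition by s_id, group_id order by date) as prev_date
      from t
     ) t;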