Running total with OVER - SQL

I'm trying to create a running total of the number of files opened per day so I can use the data for a graph showing cumulative results.
The data is basically the file opening date, a calculated field showing 'This month' or 'Last Month' depending on the date and the running total field that I'm trying to figure out.
Date Month Count
==== ===== =====
2019-08-01 Last Month 6
2019-08-02 Last Month 2
2019-08-03 Last Month 5
I want to have a running total...so 6, 8, 13 etc
But all I'm getting is a row count (1,2,3 etc) for my count field.
select
    FileDate,
    Month,
    sum(Count) OVER (PARTITION BY month order by Filedate) as 'Count'
from (
    select
        1 as 'Count',
        Case
            When month(cast(concat(right(d.var_val,4),substring(d.var_val,4,2),left(d.var_val,2)) as DATE)) = Month(getdate()) then 'This Month'
            else 'Last Month'
        end as 'Month'
    FROM data d
    left join otherdata m on d.VAR_FileID = m.MAT_FileID
    left join otherdata u on m.MAT_Fee_Earner = u.User_ID
    left join otherdata br on m.MAT_BranchID = br.BR_ID
    WHERE d.var_no IN ( '1628' )
    and Len(var_val) = 10
) files
where Month(FileDate) in (MONTH(getdate()), MONTH(getDate())-1)
and Year(Filedate) = Year(Getdate())
and Dept = 'Peterborough Property'
group by Month, FileDate, count
GO
I'm assuming I've not quite grasped the proper usage of 'OVER' - any pointers would be great!

The PARTITION BY clause indicates when to reset the count, so by partitioning by month you are only counting records within each discrete month. To get a running total over the whole dataset, you don't want the partition clause at all, just the ORDER BY clause.
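In other words, keep only the ORDER BY inside OVER. A sketch of the outer select (assuming the derived table files also exposes FileDate):
select
    FileDate,
    Month,
    sum([Count]) OVER (order by FileDate) as RunningTotal
from ( ... ) files  -- same derived table as in the question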

Hopefully you're clear on the OVER clause now (see Sentinel's answer above). In that case, replace the column as follows, so that the count increases continuously across all rows from the sub-query, based on the ORDER BY clause; see the documentation on the OVER clause for more details.
sum(Count) OVER (Order by Filedate) as [Count]
-- or
sum(Count) OVER (Order by Filedate desc) as [Count]
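One extra nuance (an assumption about the data, not something stated in the question): if several files share the same Filedate, the default RANGE frame used by SUM(...) OVER (ORDER BY ...) adds all tied dates in one step. An explicit ROWS frame gives a strict row-by-row running total:
sum(Count) OVER (Order by Filedate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as [Count]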

Select rows for last n days after event occurs

I have the following table and data:
PatientID PatientName Diagnosed ReportDate ...
1 0
1 0
1 0
1 1
So there are multiple rows for each patient, as the reports come in a few times a day.
Whenever the diagnosed field is changed to 1, for that patient, I'd like to get the past 3 days of data. So when Diagnosed ==1, get report time -3 days of data for each patient.
SELECT Patients.ReportDate
FROM Patients
WHERE Diagnosed = 1 and date > ReportDate - interval '3' day;
So getting the past 3 days of data can be done with ReportDate minus an interval, but how do I specify that for every patient (since there are multiple rows per patient id) based on the diagnosed field?
I usually do this filtering after getting CSVs in Python, but the data set is too large, so I'd like to filter before I convert them to dataframes.
You can look at this another way, which is whether diagnosed = 1 in the next three days -- and take all rows where that is true:
select p.*
from (select p.*,
             count(*) filter (where diagnosed = 1) over (
                 partition by patientId
                 order by reportDate
                 range between interval '0 day' following and interval '3 day' following
             ) as cnt_diagnosed_3
      from patients p
     ) p
where cnt_diagnosed_3 > 0
order by patientId, reportDate;
Whenever the diagnosed field is changed to 1, for that patient, I'd like to get the past 3 days of data.
SELECT (p).*
FROM  (
   SELECT p
        , diagnosed
        , bool_or(diagnosed = 1) OVER (w RANGE BETWEEN CURRENT ROW AND '3 days' FOLLOWING) AS in_range
        , lag(diagnosed) OVER w AS last_diagnosed
   FROM   patients p
   WINDOW w AS (PARTITION BY patientid ORDER BY reportdate)
   ) sub
WHERE  diagnosed = 0 AND in_range
   OR  diagnosed = 1 AND last_diagnosed = 0
ORDER  BY patientid, reportdate;
db<>fiddle here
Returns the "past 3 days of data" where the "field is changed to 1" (previous row had "0").
The WINDOW clause is just syntactic sugar to avoid spelling out the same window definition repeatedly. (No additional benefit for performance.)
SELECT p in the innermost subquery is a neat way to get the whole row. The outer SELECT (p).* returns complete rows without auxiliary columns added in the subquery. This way we get whole rows without spelling out all columns (or even needing to know all of them).
RANGE distance PRECEDING/FOLLOWING requires Postgres 11 or later.
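A minimal sketch of that syntactic sugar, using a hypothetical table t with columns grp, ts and val:
SELECT val
     , sum(val) OVER w AS running_sum
     , lag(val) OVER w AS prev_val
FROM   t
WINDOW w AS (PARTITION BY grp ORDER BY ts);
-- same result as repeating OVER (PARTITION BY grp ORDER BY ts) for each window function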
Here is a slower alternative that also works for older versions:
SELECT p.*
FROM  (
   SELECT patientid, reportdate
   FROM  (
      SELECT patientid, reportdate, diagnosed
           , lag(diagnosed) OVER (PARTITION BY patientid ORDER BY reportdate) AS last_diagnosed
      FROM   patients
      ) p0
   WHERE  diagnosed = 1
   AND    last_diagnosed = 0
   ) d
JOIN   patients p USING (patientid)
WHERE  p.reportdate BETWEEN d.reportdate - interval '3 days' AND d.reportdate
ORDER  BY p.patientid, p.reportdate;
Subquery d selects rows where Diagnosed just switched to 1. Then self-join to select your time frame.
For gaps-and-islands basics, see:
Select longest continuous sequence
You also added:
So when Diagnosed ==1, get report time -3 days of data for each patient.
That's a wider definition, and that's what Gordon's query does. Goes to show the importance of an exact definition of requirements.

SQL - calculating hours since the earliest date in a partition

I have the following SQL code:
select
survey.ContactId,
survey.CommId,
survey.CommCreatedDate,
survey.CommIdStatus,
br.[Value],
null as HoursPastSinceFirstActiveSurvey,
row_number() over (partition by survey.ContactId order by survey.CommCreatedDate desc) as [row]
from
Survey_Completed survey
inner join
Business_Rules br on br.Name = 'OPT_OUT_TIME'
where
survey.CommIdStatus = 'Active'
Which produces the following result set:
What I need help with is filling out HoursPastSinceFirstActiveSurvey. The logic here should be as follows:
Calculate the total number of hours that have passed since the earliest (by CommCreatedDate) record in the partition for consecutive (by day) records. In order to address the "consecutive" part, I was thinking perhaps it might be possible to add to the partitioning logic to only partition if the days are consecutive. I'm not entirely sure if that's possible, though. So for example, look at the last two records. They are grouped as a partition, the dates are consecutive, and the earliest date/time in this partition is Nov 11 2020 12:00 AM. So I would want to perform the following in order to populate HoursPastSinceFirstActiveSurvey for these two records:
Today's date minus Nov 11 2020 12:00 AM.
This would be the value for those two records in the partition for HoursPastSinceFirstActiveSurvey. I am not sure where to even start with this!! Thank you all.
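One common trick for grouping consecutive days (and what the solution below boils down to) is to subtract a row number from the date, so the result stays constant within each unbroken run; a rough sketch against the same Survey_Completed table:
select
    ContactId,
    CommCreatedDate,
    -- the same date for every row in an unbroken run of consecutive days per contact
    cast(dateadd(day,
                 -row_number() over (partition by ContactId order by CommCreatedDate),
                 CommCreatedDate) as date) as ConsecutiveGroup
from Survey_Completed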
I was able to solve this with the following query. Feedback is entirely WELCOME!
select
    Q2.ContactId,
    min(Q2.CommCreatedDate) as MinDate,
    max(Q2.CommCreatedDate) as MaxDate,
    Q2.Consecutive,
    datediff(hour, min(Q2.CommCreatedDate), max(Q2.CommCreatedDate)) AS HoursPassed
from
    (select
        Q1.ContactId,
        Q1.CommId,
        Q1.CommCreatedDate,
        Q1.CommIdStatus,
        Q1.[Value],
        Q1.Consecutive,
        Q1.[row],
        Q1.countOfPartition
    from
        (select
            survey.ContactId,
            survey.CommId,
            survey.CommCreatedDate,
            survey.CommIdStatus,
            br.[Value],
            CAST(dateadd(day, -row_number() over (partition by survey.ContactId order by survey.CommCreatedDate), survey.CommCreatedDate) as Date) as Consecutive,
            row_number() over (partition by survey.ContactId order by survey.CommCreatedDate desc) as [row],
            count(*) over (partition by survey.ContactId) as countOfPartition
        from
            Survey_Completed survey
            inner join Business_Rules br on br.Name = 'OPT_OUT_TIME'
        where
            survey.CommIdStatus = 'Active') Q1
    where
        Q1.countOfPartition <> 1) Q2
group by
    Q2.ContactId, Q2.Consecutive, Q2.[Value]
having
    datediff(hour, min(Q2.CommCreatedDate), max(Q2.CommCreatedDate)) > Q2.[Value]

Finding id's available in previous weeks but not in current week

How do I find an id which was present in previous weeks but is not available in the current week, on a rolling basis? For example:
Week1 has id 1,2,3,4,5
Week2 has id 3,4,5,7,8
Week3 has id 1,3,5,10,11
So ids 1 and 2 are missing in week 2, and ids 2, 4, 7, 8 are missing in week 3 compared with the previous 2 weeks. But how do I do this on a rolling window for a large amount of data distributed over a period of 20+ years?
Please find the sample dataset and expected output below. I am expecting the output to be partitioned based on the WEEK_END date.
Dataset
ID|WEEK_START|WEEK_END|APPEARING_DATE
7152|2015-12-27|2016-01-02|2015-12-27
8350|2015-12-27|2016-01-02|2015-12-27
7152|2015-12-27|2016-01-02|2015-12-29
4697|2015-12-27|2016-01-02|2015-12-30
7187|2015-12-27|2016-01-02|2015-01-01
8005|2015-12-27|2016-01-02|2015-12-27
8005|2015-12-27|2016-01-02|2015-12-29
6254|2016-01-03|2016-01-09|2016-01-03
7962|2016-01-03|2016-01-09|2016-01-04
3339|2016-01-03|2016-01-09|2016-01-06
7834|2016-01-03|2016-01-09|2016-01-03
7962|2016-01-03|2016-01-09|2016-01-05
7152|2016-01-03|2016-01-09|2016-01-07
8350|2016-01-03|2016-01-09|2016-01-09
2403|2016-01-10|2016-01-16|2016-01-10
0157|2016-01-10|2016-01-16|2016-01-11
2228|2016-01-10|2016-01-16|2016-01-14
4697|2016-01-10|2016-01-16|2016-01-14
Expected Output
Partition1: WEEK_END=2016-01-02
ID|MAX(LAST_APPEARING_DATE)
7152|2015-12-29
8350|2015-12-27
4697|2015-12-30
7187|2015-01-01
8005|2015-12-29
Partition2: WEEK_END=2016-01-09
ID|MAX(LAST_APPEARING_DATE)
7152|2016-01-07
8350|2016-01-09
4697|2015-12-30
7187|2015-01-01
8005|2015-12-29
6254|2016-01-03
7962|2016-01-05
3339|2016-01-06
7834|2016-01-03
Partition3: WEEK_END=2016-01-16
ID|MAX(LAST_APPEARING_DATE)
7152|2016-01-07
8350|2016-01-09
4697|2016-01-14
7187|2015-01-01
8005|2015-12-29
6254|2016-01-03
7962|2016-01-05
3339|2016-01-06
7834|2016-01-03
2403|2016-01-10
0157|2016-01-11
2228|2016-01-14
Please use the query below:
select ID, MAX(APPEARING_DATE) from table_name
group by ID, WEEK_END;
Or, including WEEK_END:
select ID, WEEK_END, MAX(APPEARING_DATE) from table_name
group by ID, WEEK_END;
You can use aggregation:
select t.id, max(week_end)
from t
group by id
having max(week_end) < '2016-01-02';
Adjust the date in the having clause for the week end that you want.
Actually, your question is a bit unclear. I'm not sure if a later week end would keep the row or not. If you want "as of" data, then include a where clause:
select t.id, max(week_end)
from t
where week_end < '2016-01-02'
group by id
having max(week_end) < '2016-01-02';
If you want this for a range of dates, then you can use a derived table:
select we.the_week_end, t.id, max(week_end)
from (select '2016-01-02' as the_week_end union all
select '2016-01-09' as the_week_end
) we cross join
t
where t.week_end < we.the_week_end
group by id, we.the_week_end
having max(t.week_end) < we.the_week_end;

Need to count unique transactions by month but ignore records that occur 3 days after 1st entry for that ID

I have a table with just two columns: User_ID and fail_date. Each time somebody's card is rejected they are logged in the table, their card is automatically tried again 3 days later, and if they fail again, another entry is added to the table. I am trying to write a query that counts unique failures by month so I only want to count the first entry, not the 3 day retries, if they exist. My data set looks like this
user_id fail_date
222 01/01
222 01/04
555 02/15
777 03/31
777 04/02
222 10/11
so my desired output would be something like this:
month unique_fails
jan 1
feb 1
march 1
april 0
oct 1
I'll be running this in Vertica, but I'm not so much looking for perfect syntax in replies. Just help on how to approach this problem, as I can't really think of a way to make it work. Thanks!
You could use lag() to get the previous timestamp per user. If the current and the previous timestamp are less than or exactly three days apart, it's a follow up. Mark the row as such. Then you can filter to exclude the follow ups.
It might look something like:
SELECT month,
       count(*) unique_fails
FROM  (SELECT month(fail_date) month,
              CASE
                 WHEN datediff(day,
                               lag(fail_date) OVER (PARTITION BY user_id
                                                    ORDER BY fail_date),
                               fail_date) <= 3 THEN
                    1
                 ELSE
                    0
              END follow_up
       FROM elbat) x
WHERE follow_up = 0
GROUP BY month;
I'm not so sure about the exact syntax in Vertica, so it might need some adaptations. I also don't know if fail_date actually is some date/time type variant or just a string. If it's just a string, the date/time specific functions may not work on it and have to be replaced, or the string has to be converted prior to passing it to the functions.
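For instance, if fail_date is stored as text, a conversion along these lines might be needed first. This assumes an ISO-style 'YYYY-MM-DD' string (the sample only shows MM/DD, so the real format may differ) and uses Vertica's TO_DATE with a format pattern:
-- hypothetical; adjust the format string to match the actual data
TO_DATE(fail_date, 'YYYY-MM-DD')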
If the data spans several years you might also want to include the year in addition to the month, to keep months from different years apart. In the inner SELECT add a column year(fail_date) year, and add year to the column list and the GROUP BY of the outer SELECT.
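Applied to the query above, that change might look roughly like this (untested, same assumed table elbat):
SELECT year,
       month,
       count(*) unique_fails
FROM  (SELECT year(fail_date) year,
              month(fail_date) month,
              CASE
                 WHEN datediff(day,
                               lag(fail_date) OVER (PARTITION BY user_id
                                                    ORDER BY fail_date),
                               fail_date) <= 3 THEN
                    1
                 ELSE
                    0
              END follow_up
       FROM elbat) x
WHERE follow_up = 0
GROUP BY year, month;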
You can add a flag about whether this is a "unique_fail" by doing:
select t.*,
(case when lag(fail_date) over (partition by user_id order by fail_date) > fail_date - 3
then 0 else 1
end) as first_failure_flag
from t;
Then, you want to count this flag by month:
select to_char(fail_date, 'Mon'), -- should always include the year
sum(first_failure_flag)
from (select t.*,
(case when lag(fail_date) over (partition by user_id order by fail_date) > fail_date - 3
then 0 else 1
end) as first_failure_flag
from t
) t
group by to_char(fail_date, 'Mon')
order by min(fail_date)
In a derived table, determine the previous fail_date (prev_fail_date) for a specific user_id and fail_date, using a correlated subquery.
Using the derived table dt, count the failure if the difference in days between the current fail_date and prev_fail_date is greater than 3.
The DateDiff() function, alongside the If() function, is used to determine the cases which are not repeated tries.
To group this result by month, you can use the MONTH function.
But the data can be from multiple years, so you need to separate them out by year as well; you can do a multi-level group by, using the YEAR function as well.
Try the following (in MySQL) - you can get the idea for other RDBMS as well:
SELECT YEAR(dt.fail_date) AS year_fail_date,
MONTH(dt.fail_date) AS month_fail_date,
COUNT( IF(dt.prev_fail_date IS NULL OR DATEDIFF(dt.fail_date, dt.prev_fail_date) > 3, user_id, NULL) ) AS unique_fails -- also count the very first failure (prev_fail_date IS NULL)
FROM (
SELECT
t1.user_id,
t1.fail_date,
(
SELECT t2.fail_date
FROM your_table AS t2
WHERE t2.user_id = t1.user_id
AND t2.fail_date < t1.fail_date
ORDER BY t2.fail_date DESC
LIMIT 1
) AS prev_fail_date
FROM your_table AS t1
) AS dt
GROUP BY
year_fail_date,
month_fail_date
ORDER BY
year_fail_date ASC,
month_fail_date ASC

Find Distinct IDs when the due date is always on the last day of each month

I have to find distinct IDs throughout the whole history of each ID whose due dates are always on the last day of each month.
Suppose I have the following dataset:
ID DUE_DT
1 1/31/2014
1 2/28/2014
1 3/31/2014
1 6/30/2014
2 1/30/2014
2 2/28/2014
3 1/29/2016
3 2/29/2016
I want to write SQL that gives me ID = 1, as for this specific ID the due date is always on the last day of the given month.
What would be the easiest way to approach it?
You can do:
select id
from t
group by id
having sum(case when extract(day from due_dt + interval '1 day') = 1 then 1 else 0 end) = count(*);
This uses ANSI/ISO standard functions for date arithmetic. These tend to vary by database, but the idea is the same in all databases -- add one day and see if the day of the month is 1 for all the rows.
If you're using SQL Server 2012+ you can use the EOMONTH() function to achieve this:
SELECT DISTINCT ID FROM [table]
WHERE DUE_DT = EOMONTH(DUE_DT)
http://rextester.com/VSPQR78701
The idea is quite simple:
you are on the last day of the month if (the month of due date) is not the same as (the month of due date + 1 day). This covers all cases across year, leap year and so on.
from there on, if (the count of rows for one id) is the same as (the count of rows for this id which are the last day of the month) you have a winner.
I tried to write an example (not tested). You do not specify which DB, so I will assume that CTEs (common table expressions) are available. If not, just put the CTE as a subquery.
In the same way, I am not sure that dateadd and interval work the same in all dialects.
with addlastdayofmonth as (
select
id
-- adding a 'virtualcolumn', 1 if last day of month 0 otherwise
, if(month(dateadd(due_date, interval '1' day)) != month(due_date), 1 ,0) as onlastday
from
table
)
select
id
, count(*) - sum(onlastday) as alwayslastday
from
addlastdayofmonth
group by
id
having
-- if count(rows) == count(rows with last day) we have a winner
alwayslastday = 0
MySQL version (credits to @Gordon Linoff)
SELECT
ID
FROM
<table>
GROUP BY
ID
HAVING
SUM(IF(day(DUE_DT + interval 1 Day) = 1, 1, 0)) = COUNT(ID);
Original Answer:
SELECT MAX(DUE_DT) FROM <table> WHERE ID = <the desired ID>
or if you want all MAX(DUE_DT) for each unique ID
SELECT ID, MAX(DUE_DT) FROM <table> GROUP BY ID