SQL: How to select a max record each day?

I found a lot of similar questions, but none fits my case perfectly and I have been struggling for hours to find a solution. My table is composed of the fields DAY, HOUR, EVENT1, EVENT2, EVENT3, so I have 24 rows for each day. EVENT1, EVENT2 and EVENT3 hold values, and for each day I'd like to select only the row (the record) for which EVENT3 has the maximum value in that day (among the 24 hours). The final outcome will be one row per day.
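For reference, here is a minimal hedged sketch of the table layout being assumed by the answers below (the table name t and the column types are illustrative, not from the original post; note the answers refer to the day column as either date or day):

-- hypothetical layout assumed by the queries below
create table t (
    day    date not null,  -- calendar day
    hour   int  not null,  -- 0-23, one row per hour
    event1 int,
    event2 int,
    event3 int             -- the column whose daily maximum we want
);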

One method uses a correlated subquery:
select t.*
from t
where t.event3 = (select max(t2.event3)
                  from t t2
                  where t2.date = t.date
                 );
In most databases, this has very good performance with an index on (date, event3).
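For reference, a minimal sketch of such an index, assuming the table and column names used above (the index name is arbitrary):

create index ix_t_date_event3 on t (date, event3);

With that covering index, each day's maximum event3 can typically be resolved from the index alone rather than by scanning the table.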
A more canonical solution uses row_number():
select t.*
from (select t.*,
             row_number() over (partition by date order by event3 desc) as seqnum
      from t
     ) t
where seqnum = 1;

Another option aside from using correlated subqueries is to write this as a left self-join, something like this:
SELECT t.*
FROM t
LEFT JOIN t AS t2 ON t.day = t2.day AND t2.event3 > t.event3
WHERE t2.day IS NULL
If you want to select an arbitrary matching row each day in the event of multiple rows with the same maximum event3, tack GROUP BY t.day on the end of that.
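A hedged sketch of that variant (this relies on permissive GROUP BY handling, such as MySQL's default without ONLY_FULL_GROUP_BY; strictly standard SQL would reject the non-aggregated columns in the select list):

SELECT t.*
FROM t
LEFT JOIN t AS t2 ON t.day = t2.day AND t2.event3 > t.event3
WHERE t2.day IS NULL
GROUP BY t.day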
I'm not sure how performance of this is going to compare to Gordon Linoff's solutions, but they might get assembled into quite similar query plans by the RDBMS anyway.

Related

SQL Server: loop once a month value to the whole month

I have a table that gets one value on only one day of each month. I want to duplicate that value across the whole month until a new value shows up. The result will be a table with data for each day of the month based on the last known value.
Can someone help me write this query?
This is untested, due to a lack of consumable sample data, but this looks like a gaps-and-islands problem. Here you can count the number of non-NULL values of Yield to assign the group "number" and then get the windowed MAX in the outer SELECT:
WITH CTE AS(
    SELECT Yield,
           [Date],
           COUNT(yield) OVER (ORDER BY [Date]) AS Grp
    FROM dbo.YourTable)
SELECT MAX(yield) OVER (PARTITION BY grp) AS yield,
       [Date],
       DATENAME(WEEKDAY,[Date]) AS [Day]
FROM CTE;
You seem to have data on the first of the month. That suggests an alternative approach:
select t.*, t2.yield as imputed_yield
from t cross apply
     (select t2.*
      from t t2
      where t2.date = datefromparts(year(t.date), month(t.date), 1)
     ) t2;
This should be able to take advantage of an index on (date, yield). And it does assume that the value you want is on the first date of the month.

How to use analytic functions to find the next-most-recent timestamp in the same table

I am currently using a self-join to calculate the next-most-recent timestamp for any given row:
SELECT t.COLUMN1,
       t.SOME_OTHER_COLUMN,
       t.TIMESTAMP_COLUMN,
       MAX(pt.TIMESTAMP_COLUMN) AS PREV_TIMESTAMP_COLUMN
FROM Table1 t
LEFT JOIN Table1 pt ON pt.COLUMN1 = t.COLUMN1
                   AND pt.TIMESTAMP_COLUMN < t.TIMESTAMP_COLUMN
                   AND pt.SOME_OTHER_COLUMN = SOME_LITERAL_VALUE
GROUP BY t.COLUMN1,
         t.SOME_OTHER_COLUMN,
         t.TIMESTAMP_COLUMN
The problem is, I need to do this multiple times, for multiple comparisons, which will require multiple nested self-joins, which will be very ugly code, and probably very slow to execute.
How do you accomplish this same thing, but using analytic functions instead?
I started writing some code, but it looks wrong:
SELECT DISTINCT t.COLUMN1,
       t.SOME_OTHER_COLUMN,
       t.TIMESTAMP_COLUMN,
       MAX(CASE WHEN t.TIMESTAMP_COLUMN < t.TIMESTAMP_COLUMN
                 AND t.SOME_OTHER_COLUMN = SOME_LITERAL_VALUE
                THEN t.TIMESTAMP END) OVER
           (PARTITION BY t.COLUMN1) AS PREV_TIMESTAMP_COLUMN1,
       MAX(CASE WHEN t.TIMESTAMP_COLUMN < t.TIMESTAMP_COLUMN
                 AND t.SOME_OTHER_COLUMN = SOME_OTHER_LITERAL_VALUE
                THEN t.TIMESTAMP END) OVER
           (PARTITION BY t.COLUMN1) AS PREV_TIMESTAMP_COLUMN2
FROM Table1 t
As soon as I saw WHEN t.TIMESTAMP_COLUMN < t.TIMESTAMP_COLUMN I thought "This can't be right ..."
I know there are many other ways of using analytic functions, such as ROWS UNBOUNDED PRECEDING, but I'm new to analytic functions, and I don't know how to implement those.
What's the best way to use analytic functions to accomplish this?
I think that you could do a conditional window max with a frame specification, as follows:
SELECT DISTINCT
       COLUMN1,
       SOME_OTHER_COLUMN,
       TIMESTAMP_COLUMN,
       MAX(CASE WHEN SOME_OTHER_COLUMN = 'SOME_LITERAL_VALUE' THEN TIMESTAMP_COLUMN END)
           OVER(
               PARTITION BY COLUMN1
               ORDER BY TIMESTAMP_COLUMN
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
           ) PREV_TIMESTAMP_COLUMN
FROM Table1 t
This will get you the greatest timestamp among previous records having the same COLUMN1 and whose SOME_OTHER_COLUMN is equal to the desired literal value.
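Since the question needs this for multiple comparisons, here is a hedged sketch of how the same pattern could be repeated with a second conditional window MAX in a single pass (the second literal and the output aliases follow the naming used in the question and are placeholders):

SELECT DISTINCT
       COLUMN1,
       SOME_OTHER_COLUMN,
       TIMESTAMP_COLUMN,
       MAX(CASE WHEN SOME_OTHER_COLUMN = 'SOME_LITERAL_VALUE' THEN TIMESTAMP_COLUMN END)
           OVER(PARTITION BY COLUMN1
                ORDER BY TIMESTAMP_COLUMN
                ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) PREV_TIMESTAMP_COLUMN1,
       MAX(CASE WHEN SOME_OTHER_COLUMN = 'SOME_OTHER_LITERAL_VALUE' THEN TIMESTAMP_COLUMN END)
           OVER(PARTITION BY COLUMN1
                ORDER BY TIMESTAMP_COLUMN
                ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) PREV_TIMESTAMP_COLUMN2
FROM Table1 t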

Get minimum without using row number/window function in Bigquery

I have a table as shown below.
What I would like to do is get the minimum value for each subject. Though I am able to do this with the row_number function, I would like to do it with a GROUP BY and MIN() approach, but it doesn't work.
row_number approach - works fine
SELECT *
FROM (select subject_id, value, id, min_time, max_time, time_1,
             row_number() OVER (PARTITION BY subject_id ORDER BY value) AS rank
      from table A)
WHERE RANK = 1
min() approach - doesn't work
select subject_id,id,min_time,max_time,time_1,min(value) from table A
GROUP BY SUBJECT_ID,id
As you can see, just the two columns (subject_id and id) are enough to group the items together; they will help differentiate the groups. But why am I not able to use the other columns in the SELECT clause? If I use the other columns, I may not get the expected output because time_1 has different values.
I expect my output to be as shown below.
In BigQuery you can use aggregation for this:
SELECT ARRAY_AGG(a ORDER BY value LIMIT 1)[SAFE_OFFSET(0)].*
FROM table A
GROUP BY SUBJECT_ID;
This uses ARRAY_AGG() to aggregate each record (the a in the argument list). ARRAY_AGG() allows you to order the result (by value) and to limit the size of the array. The latter is important for performance.
After the rows are aggregated into the array, you want its first element. The .* transforms the record referred to by a into its component columns.
I'm not sure why you don't want to use ROW_NUMBER(). If the problem is the lingering rank column, you can easily remove it:
SELECT a.* EXCEPT (rank)
FROM (SELECT a.*,
             ROW_NUMBER() OVER (PARTITION BY subject_id ORDER BY value) AS rank
      FROM A
     ) a
WHERE RANK = 1;
Are you looking for something like the query below?
SELECT
    A.subject_id,
    A.id,
    A.min_time,
    A.max_time,
    A.time_1,
    A.value
FROM table A
INNER JOIN(
    SELECT subject_id, MIN(value) Value
    FROM table
    GROUP BY subject_id
) B ON A.subject_id = B.subject_id
   AND A.Value = B.Value
If you are not required to select the Time_1 column's value, the following query will work (as I can see, the values in the min_time and max_time columns are the same within a group):
SELECT
    A.subject_id, A.id, A.min_time, A.max_time,
    --A.time_1,
    MIN(A.value)
FROM table A
GROUP BY
    A.subject_id, A.id, A.min_time, A.max_time
Finally, the best approach is if you can apply something like CAST(Time_1 AS DATE) to your time column. This considers only the date part and ignores the time part. The query would be:
SELECT
    A.subject_id, A.id, A.min_time, A.max_time,
    CAST(A.time_1 AS DATE) Time_1,
    MIN(A.value)
FROM table A
GROUP BY
    A.subject_id, A.id, A.min_time, A.max_time,
    CAST(A.time_1 AS DATE)
-- Check whether the CAST AS DATE syntax in BigQuery
-- is exactly as written here or slightly different.
Below is for BigQuery Standard SQL and is the most efficient way for cases like the one in your question:
#standardSQL
SELECT AS VALUE ARRAY_AGG(t ORDER BY value LIMIT 1)[OFFSET(0)]
FROM `project.dataset.table` t
GROUP BY subject_id
Using ROW_NUMBER is not efficient and in many cases leads to a "Resources exceeded" error.
Note: a self join is also a very inefficient way of achieving your objective.
A bit late to the party, but here is a CTE-based approach which made sense to me:
with mins as (
select subject_id, id, min(value) as min_value
from table
group by subject_id, id
)
select distinct t.subject_id, t.id, t.time_1, t.min_time, t.max_time, m.min_value
from table t
join mins m on m.subject_id = t.subject_id and m.id = t.id

Trying to get the greatest value from a customer on a given day

What I need to do: if a customer makes more than one transaction in a day, I need to display the greatest value (and ignore any other values).
The query is pretty big, but the code I inserted below is the focus of the issue. I’m not getting the results I need. The subselect ideally should be reducing the number of rows the query generates since I don’t need all the transactions, just the greatest one, however my code isn’t cutting it. I’m getting the exact same number of rows with or without the subselect.
Note: I don't actually have a t.* in the actual query; there are just a dozen or so other fields being pulled in. I added the t.* only to simplify the code example.
SELECT
    t.*,
    (SELECT TOP (1)
            t1.CustomerGUID,
            t1.Value,
            t1.Date
     FROM #temp t1
     WHERE t1.CustomerGUID = t.CustomerGUID
       AND t1.Date = t.Date
     ORDER BY t1.Value DESC) AS "Value"
FROM #temp t
Is there an obvious flaw in my code or is there a better way to achieve the result of getting the greatest value transaction per day per customer?
Thanks
You may want to do as follows:
SELECT
    t1.CustomerGUID,
    t1.Date,
    MAX(t1.Value) AS Value
FROM #temp t1
GROUP BY
    t1.CustomerGUID,
    t1.Date
You can use row_number() as shown below.
SELECT *
FROM
(
    SELECT *, ROW_NUMBER() OVER (PARTITION BY CustomerGUID, [Date] ORDER BY Value DESC) AS SrNo
    FROM <YourTable>
) AS T
WHERE SrNo = 1
Sample data would be more helpful.
Try this window function:
MAX(value) OVER(PARTITION BY date,customer ORDER BY value DESC)
It's faster and more efficient.
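Here is a hedged sketch of how that window function might be wired into a full query against the #temp table from the question (the daily_max alias and the final equality filter are illustrative assumptions; the ORDER BY is dropped because MAX over the whole partition does not need it):

SELECT t.*
FROM (
    SELECT t.*,
           MAX(t.Value) OVER (PARTITION BY t.Date, t.CustomerGUID) AS daily_max
    FROM #temp t
) AS t
WHERE t.Value = t.daily_max

Note that ties for the daily maximum would still return multiple rows.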
Probably many other ways to do it, but this one is simple and works
select t.*
from (
      select
          convert(varchar(8), r.date, 112) one_day
         ,max(r.Value) max_sale
      from #temp r
      group by convert(varchar(8), r.date, 112)
     ) e
inner join #temp t on t.value = e.max_sale and convert(varchar(8), t.date, 112) = e.one_day
If you have 2 people who spend the exact same amount, and that amount is also the max, you'll get 2 records for that day.
The convert(varchar(8), r.date, 112) will perform as desired on the date, datetime and datetime2 data types. If your date is a varchar, char, nchar or nvarchar, you'll want to examine the data to find out whether to use left(t.date,10) or left(t.date,8).
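For reference, a small hedged illustration of what style 112 produces (the sample dates are made up):

select convert(varchar(8), cast('2020-03-15 14:30:00' as datetime), 112) as d1  -- '20200315'
      ,convert(varchar(8), cast('2020-03-15' as date), 112)              as d2; -- '20200315'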
If I've understood your requirement correctly, you have stated "greatest value transaction per day per customer". That suggests to me you don't want 1 row per customer in the output, but a row per day per customer.
To achieve this you can group on the day like this
Select t.customerid, datepart(day,t.date) as Daydate,
max(t.value) as value from #temp t group by
t.customerid, datepart(day,t.date);

SQL server - count distinct over function or row_numer with rows window function

I am currently trying to get a distinct count of customers over a 90-day rolling period. I can get the amount using SUM over a partition; however, when I do the same with COUNT DISTINCT, SQL Server doesn't support that functionality.
I have also attempted to use row_number() with an OVER partition and a ROWS frame of the current row and the 90 preceding rows, but this isn't available either.
I would greatly appreciate any suggested workaround to resolve this problem.
I have attempted to solve the problem using 2 approaches, both of which have failed based on the limitations outlined above.
Approach 1
select date
      ,count(distinct customer_id) over (order by date rows between 89 preceding and current row) as cust_count_distinct
from table
Approach 2
select date
,customer_id
,row_number() over partition (customer_id) order by date rows current row and 89 preceding as rn
from table
-- was then going to filter for rn = '1' but the rows functionality not possible with ranking function windows.
The simplest method is a correlated subquery of some sort:
select d.date, c.cnt
from (select distinct date from t) d cross apply
     (select count(distinct customerid) as cnt
      from t t2
      where t2.date >= dateadd(day, -89, d.date) and
            t2.date <= d.date
     ) c;
This is not particularly efficient (it can be a killer even on a medium-sized data set), but it might serve your needs.
You can restrict the dates being returned to test to see if it works.
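For example, a hedged sketch of restricting the output dates while testing (the cutoff dates are placeholders):

select d.date, c.cnt
from (select distinct date
      from t
      where date >= '2023-01-01' and date < '2023-02-01'   -- placeholder test window
     ) d cross apply
     (select count(distinct customerid) as cnt
      from t t2
      where t2.date >= dateadd(day, -89, d.date) and
            t2.date <= d.date
     ) c;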