Consider the following tbl:
CREATE TABLE tbl (ID INTEGER, ticker TEXT, desc TEXT);
INSERT INTO tbl (ID, ticker, desc)
VALUES (1, 'GDBR30', '30YR'),
(2, 'GDBR10', '10YR'),
(3, 'GDBR5', '5YR'),
(4, 'GDBR2', '2YR');
For reference, tbl looks like this:
ID ticker desc
1 GDBR30 30YR
2 GDBR10 10YR
3 GDBR5 5YR
4 GDBR2 2YR
When issuing the following statement, the result will be ordered according to ID.
SELECT * FROM tbl
WHERE ticker in ('GDBR10', 'GDBR5', 'GDBR30')
ID ticker desc
1 GDBR30 30YR
2 GDBR10 10YR
3 GDBR5 5YR
However, I need the ordering to adhere to the order of the passed list of values. Here's what I am looking for:
ID ticker desc
2 GDBR10 10YR
3 GDBR5 5YR
1 GDBR30 30YR
You can create a CTE that returns two columns: the values that you search for and, for each value, its sort order. Join it to the table and, in the ORDER BY clause, use the sort-order column to sort the results:
WITH cte(id, ticker) AS (VALUES (1, 'GDBR10'), (2, 'GDBR5'), (3, 'GDBR30'))
SELECT t.*
FROM tbl t INNER JOIN cte c
ON c.ticker = t.ticker
ORDER BY c.id
The only way to be sure of the final ordering of records is to use the ORDER BY clause.
The order in which the list of values is given is not relevant to the final ordering.
In your case the only solution is to give each value a 'weight' to use as the sort order.
You could, for example, replace the IN operator with an INSTR function to get a result that is both filterable and sortable.
Try something like this:
SELECT *, INSTR(',GDBR10,GDBR5,GDBR30,', ',' || ticker || ',') POS
FROM tbl
WHERE POS>0
ORDER BY POS;
If you don't want the position column in the select list, you can use a subquery:
SELECT *
FROM (SELECT *, INSTR(',GDBR10,GDBR5,GDBR30,', ',' || ticker || ',') pos FROM tbl) X
WHERE POS>0
ORDER BY POS;
The problem I'm trying to solve is removing duplicates from a particular partition as referenced by a TIMESTAMP type column. My table is something like the schema below with the timestamp column partition having day-based granularity:
requestID:STRING, ts:TIMESTAMP, recordNo:INTEGER, recordData:STRING
Now I have millions and millions of these and sometimes there are duplicates like this:
'server1234', '2020-06-10', 1, apple
'server1234', '2020-06-10', 1, apple
'server1234', '2020-06-10', 2, orange
'server1234', '2020-06-10', 2, orange
The uniqueness of the records is determined by two fields: requestID and recordNo. I'd like to remove the duplicates in the partition where CAST(ts AS DATE) = '2020-06-10'. I can see the distinct records with a simple select:
SELECT DISTINCT * FROM mytable WHERE CAST(ts AS DATE) = '2020-06-10'
There must be a way to combine a delete/update/merge with the select distinct so that I can replace the partition with the de-duplicated data.
Thoughts?
The safest way to do this is to select only the data you need (de-duplicated) out into a new table, delete the data in your permanent table, then insert your de-duplicated data back into the permanent location. BigQuery does not make updates and deletes as easy as some OLTP databases do.
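A minimal sketch of that approach, assuming the question's mytable lives in a dataset called working (the dataset and the mytable_dedup staging table are placeholder names):
-- 1. Copy the de-duplicated rows for the partition into a staging table
CREATE TABLE working.mytable_dedup AS
SELECT DISTINCT requestID, ts, recordNo, recordData
FROM working.mytable
WHERE CAST(ts AS DATE) = '2020-06-10';
-- 2. Delete that partition's rows from the permanent table
DELETE FROM working.mytable
WHERE CAST(ts AS DATE) = '2020-06-10';
-- 3. Insert the de-duplicated rows back
INSERT INTO working.mytable (requestID, ts, recordNo, recordData)
SELECT requestID, ts, recordNo, recordData
FROM working.mytable_dedup;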
If you would prefer a more one-shot approach, here is an example with the data you provided that does the trick.
-- SETUP
CREATE TABLE working.remove_dupes
(
requestID STRING,
ts TIMESTAMP,
recordNo INT64,
recordData STRING
)
PARTITION BY TIMESTAMP_TRUNC(ts, HOUR);
INSERT INTO working.remove_dupes(requestID, ts, recordNo, recordData)
VALUES
('server1234', '2020-06-10', 1, 'apple'),
('server1234', '2020-06-10', 1, 'apple'),
('server1234', '2020-06-10', 2, 'orange'),
('server1234', '2020-06-10', 2, 'orange');
------------------------------------------------------------------------------------
-- SELECTING ONLY ONE OF THE ENTRIES (NO DUPLICATES)
SELECT
requestID,
ts,
recordNo,
recordData
FROM (
SELECT
requestID,
ts,
recordNo,
recordData,
ROW_NUMBER() OVER (PARTITION BY requestID, recordNo ORDER BY ts) AS instance_num
FROM
working.remove_dupes
)
WHERE
instance_num = 1;
------------------------------------------------------------------------------------
-- REPLACE THE ORIGINAL TABLE, REMOVING DUPLICATES IN THE PROCESS
-- BACK UP YOUR TABLE FIRST!!!!! (MAKE A COPY)
CREATE OR REPLACE TABLE working.remove_dupes
PARTITION BY TIMESTAMP_TRUNC(ts, HOUR)
AS
(SELECT
requestID,
ts,
recordNo,
recordData
FROM (
SELECT
requestID,
ts,
recordNo,
recordData,
ROW_NUMBER() OVER (PARTITION BY requestID, recordNo ORDER BY ts) AS instance_num
FROM
working.remove_dupes
)
WHERE
instance_num = 1);
EDIT: Note that replacing the table can (in my experience) wipe out table metadata (descriptions) and possibly the table partition. I've updated the example to include a table partition setup.
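For the "make a copy" step, one minimal sketch (the remove_dupes_backup name is just a placeholder; declaring PARTITION BY explicitly keeps the partitioning that a plain CREATE TABLE ... AS SELECT would drop):
CREATE TABLE working.remove_dupes_backup
PARTITION BY TIMESTAMP_TRUNC(ts, HOUR)
AS
SELECT * FROM working.remove_dupes;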
Suppose I have the following record in BQ:
id name age timestamp
1 "tom" 20 2019-01-01
I then perform two "updates" on this record by using the streaming API to 'append' additional data -- https://cloud.google.com/bigquery/streaming-data-into-bigquery. This is mainly to get around the update quota that BQ enforces (and it is a high-write application we have).
I then append two edits to the table, one update that just modifies the name, and then one update that just modifies the age. Here are the three records after the updates:
id name age timestamp
1 "tom" 20 2019-01-01
1 "Tom" null 2019-02-01
1 null 21 2019-03-03
I then want to query this record to get the most "up-to-date" information. Here is how I have started:
SELECT id, **name**, **age**,max(timestamp)
FROM table
GROUP BY id
-- 1,"Tom",21,2019-03-03
How would I get the correct name and age here? Note that there could be thousands of updates to a record, so I don't want to have to write 1000 case statements, if at all possible.
For various other reasons, I usually won't have all the row data at one time; I will only have the RowID + FieldName + FieldValue.
I suppose plan B here is to do a query to get the current data and then add my changes to insert the new row, but I'm hoping there's a way to do this in one go without having to do two queries.
Below is for BigQuery Standard SQL
#standardSQL
SELECT id,
ARRAY_AGG(name IGNORE NULLS ORDER BY ts DESC LIMIT 1)[OFFSET(0)] name,
ARRAY_AGG(age IGNORE NULLS ORDER BY ts DESC LIMIT 1)[OFFSET(0)] age,
MAX(ts) ts
FROM `project.dataset.table`
GROUP BY id
You can test and play with the above using the sample data from your question, as in the example below:
#standardSQL
WITH `project.dataset.table` AS (
SELECT 1 id, "tom" name, 20 age, DATE '2019-01-01' ts UNION ALL
SELECT 1, "Tom", NULL, '2019-02-01' UNION ALL
SELECT 1, NULL, 21, '2019-03-03'
)
SELECT id,
ARRAY_AGG(name IGNORE NULLS ORDER BY ts DESC LIMIT 1)[OFFSET(0)] name,
ARRAY_AGG(age IGNORE NULLS ORDER BY ts DESC LIMIT 1)[OFFSET(0)] age,
MAX(ts) ts
FROM `project.dataset.table`
GROUP BY id
with the result:
Row id name age ts
1 1 Tom 21 2019-03-03
This is a classic use case for analytic functions in Standard SQL.
Here is how you can achieve your results:
select id, name, age from (
select id, name, age, ts, rank() over (partition by id order by ts desc) rnk
from `yourdataset.yourtable`
)
where rnk = 1
This will sub-group your records based on id and pick the one with the most recent ts (indicating the record most recently added for a given id).
You have three fields: ID, Date and Total. Your table contains multiple rows for the same day, which is valid data; however, for reporting purposes you need to show only one row per day. The row with the highest ID per day should be returned; the rest should be hidden from users (not returned).
To better picture the question, below is the sample data and the expected output:
ID, Date, Total
1, 2011-12-22, 50
2, 2011-12-22, 150
The correct result is:
2, 2011-12-22, 150
The correct output is a single row for the date 2011-12-22, and this row was chosen because it has the highest ID (2 > 1).
Assuming that you have a database that supports window functions, and that the date column is indeed just date (and not datetime), then something like:
SELECT
* --TODO - Pick columns
FROM
(
SELECT ID,[Date],Total,ROW_NUMBER() OVER (PARTITION BY [Date] ORDER BY ID desc) rn
FROM [Table]
) t
WHERE
rn = 1
Should produce one row per day - and the selected row for any given day is that with the highest ID value.
SELECT *
FROM table
WHERE ID IN ( SELECT MAX(ID)
FROM table
GROUP BY Date )
This will work.
SELECT *
FROM tableName a
INNER JOIN
(
SELECT `DATE`, MAX(ID) maxID
FROM tableName
GROUP BY `DATE`
) b ON a.id = b.MaxID AND
a.`date` = b.`date`
Probably
SELECT * FROM your_table ORDER BY ID DESC LIMIT 1
Select MAX(ID), Date, Total from foo
for MySQL
Another simple way is
SELECT TOP 1 * FROM YourTable ORDER BY ID DESC
And I think this is the simplest way!
SELECT * FROM TABLE_SUM S WHERE S.ID =
(
SELECT MAX(ID) FROM TABLE_SUM
WHERE CDATE = S.CDATE
GROUP BY CDATE
)
Table:
UserId, Value, Date.
I want to get the UserId, Value for the max(Date) for each UserId. That is, the Value for each UserId that has the latest date. Is there a way to do this simply in SQL? (Preferably Oracle)
Update: Apologies for any ambiguity: I need to get ALL the UserIds. But for each UserId, only that row where that user has the latest date.
I see many people use subqueries or else window functions to do this, but I often do this kind of query without subqueries in the following way. It uses plain, standard SQL so it should work in any brand of RDBMS.
SELECT t1.*
FROM mytable t1
LEFT OUTER JOIN mytable t2
ON (t1.UserId = t2.UserId AND t1."Date" < t2."Date")
WHERE t2.UserId IS NULL;
In other words: fetch the row from t1 where no other row exists with the same UserId and a greater Date.
(I put the identifier "Date" in delimiters because it's an SQL reserved word.)
If t1."Date" = t2."Date", you get duplicate rows. Tables usually have an auto-increment (sequence) key, e.g. id.
To avoid the duplicates, you can use the following:
SELECT t1.*
FROM mytable t1
LEFT OUTER JOIN mytable t2
ON t1.UserId = t2.UserId AND ((t1."Date" < t2."Date")
OR (t1."Date" = t2."Date" AND t1.id < t2.id))
WHERE t2.UserId IS NULL;
Re comment from @Farhan:
Here's a more detailed explanation:
An outer join attempts to join t1 with t2. By default, all results of t1 are returned, and if there is a match in t2, it is also returned. If there is no match in t2 for a given row of t1, then the query still returns the row of t1, and uses NULL as a placeholder for all of t2's columns. That's just how outer joins work in general.
The trick in this query is to design the join's matching condition such that t2 must match the same userid, and a greater date. The idea being if a row exists in t2 that has a greater date, then the row in t1 it's compared against can't be the greatest date for that userid. But if there is no match -- i.e. if no row exists in t2 with a greater date than the row in t1 -- we know that the row in t1 was the row with the greatest date for the given userid.
In those cases (when there's no match), the columns of t2 will be NULL -- even the columns specified in the join condition. So that's why we use WHERE t2.UserId IS NULL, because we're searching for the cases where no row was found with a greater date for the given userid.
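To make that concrete, here is a tiny self-contained illustration (the sample rows are hypothetical, not the asker's data):
CREATE TABLE mytable (UserId VARCHAR(10), Value INT, "Date" DATE);
INSERT INTO mytable VALUES ('A', 1, DATE '2008-12-31');
INSERT INTO mytable VALUES ('A', 2, DATE '2009-01-01');
-- For t1 = ('A', 2008-12-31) a t2 row with a later date exists, so t2.UserId is not NULL
-- and the row is filtered out; for t1 = ('A', 2009-01-01) no such t2 row exists, so it is kept.
SELECT t1.*
FROM mytable t1
LEFT OUTER JOIN mytable t2
  ON (t1.UserId = t2.UserId AND t1."Date" < t2."Date")
WHERE t2.UserId IS NULL;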
This will retrieve all rows for which the my_date column value is equal to the maximum value of my_date for that userid. This may retrieve multiple rows for the userid where the maximum date is on multiple rows.
select userid,
my_date,
...
from
(
select userid,
my_date,
...
max(my_date) over (partition by userid) max_my_date
from users
)
where my_date = max_my_date
"Analytic functions rock"
Edit: With regard to the first comment ...
"using analytic queries and a self-join defeats the purpose of analytic queries"
There is no self-join in this code. There is instead a predicate placed on the result of the inline view that contains the analytic function -- a very different matter, and completely standard practice.
"The default window in Oracle is from the first row in the partition to the current one"
The windowing clause is only applicable in the presence of the order by clause. With no order by clause, no windowing clause is applied by default and none can be explicitly specified.
The code works.
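For illustration, a minimal sketch of that point against the same users table (column list abbreviated):
select userid,
       my_date,
       -- no order by: the max is computed over the whole partition; no window frame applies
       max(my_date) over (partition by userid) max_all,
       -- with order by: the default frame runs from the first row of the partition
       -- to the current row, i.e. a running maximum
       max(my_date) over (partition by userid order by my_date) max_running
from users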
SELECT userid, MAX(value) KEEP (DENSE_RANK FIRST ORDER BY date DESC)
FROM table
GROUP BY userid
I don't know your exact columns names, but it would be something like this:
SELECT userid, value
FROM users u1
WHERE date = (
SELECT MAX(date)
FROM users u2
WHERE u1.userid = u2.userid
)
Not being at work, I don't have Oracle to hand, but I seem to recall that Oracle allows multiple columns to be matched in an IN clause, which should at least avoid the options that use a correlated subquery, which is seldom a good idea.
Something like this, perhaps (can't remember if the column list should be parenthesised or not):
SELECT *
FROM MyTable
WHERE (User, Date) IN
( SELECT User, MAX(Date) FROM MyTable GROUP BY User)
EDIT: Just tried it for real:
SQL> create table MyTable (usr char(1), dt date);
SQL> insert into mytable values ('A','01-JAN-2009');
SQL> insert into mytable values ('B','01-JAN-2009');
SQL> insert into mytable values ('A', '31-DEC-2008');
SQL> insert into mytable values ('B', '31-DEC-2008');
SQL> select usr, dt from mytable
2 where (usr, dt) in
3 ( select usr, max(dt) from mytable group by usr)
4 /
U DT
- ---------
A 01-JAN-09
B 01-JAN-09
So it works, although some of the new-fangly stuff mentioned elsewhere may be more performant.
I know you asked for Oracle, but in SQL 2005 we now use this:
-- Single Value
;WITH ByDate
AS (
SELECT UserId, Value, ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date DESC) RowNum
FROM UserDates
)
SELECT UserId, Value
FROM ByDate
WHERE RowNum = 1
-- Multiple values where dates match
;WITH ByDate
AS (
SELECT UserId, Value, RANK() OVER (PARTITION BY UserId ORDER BY Date DESC) Rnk
FROM UserDates
)
SELECT UserId, Value
FROM ByDate
WHERE Rnk = 1
I don't have Oracle to test it, but the most efficient solution is to use analytic queries. It should look something like this:
SELECT DISTINCT
UserId
, MaxValue
FROM (
SELECT UserId
, FIRST_VALUE (Value) Over (
PARTITION BY UserId
ORDER BY Date DESC
) MaxValue
FROM SomeTable
)
I suspect that you can get rid of the outer query and put distinct on the inner, but I'm not sure. In the meantime I know this one works.
If you want to learn about analytic queries, I'd suggest reading http://www.orafaq.com/node/55 and http://www.akadia.com/services/ora_analytic_functions.html. Here is the short summary.
Under the hood analytic queries sort the whole dataset, then process it sequentially. As you process it you partition the dataset according to certain criteria, and then for each row you look at some window (which defaults to the range from the first row in the partition to the current row - that default is also the most efficient) and can compute values using a number of analytic functions (the list of which is very similar to the aggregate functions).
In this case here is what the inner query does. The whole dataset is sorted by UserId and then Date DESC. Then it is processed in one pass. For each row you return the UserId and the Value from the first row seen for that UserId (since dates are sorted DESC, that's the value at the max date). This gives you your answer with duplicated rows. Then the outer DISTINCT squashes the duplicates.
This is not a particularly spectacular example of analytic queries. For a much bigger win consider taking a table of financial receipts and calculating for each user and receipt, a running total of what they paid. Analytic queries solve that efficiently. Other solutions are less efficient. Which is why they are part of the 2003 SQL standard. (Unfortunately Postgres doesn't have them yet. Grrr...)
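As a hedged sketch of that running-total idea (the receipts table and its user_id, receipt_id, paid_on and amount columns are hypothetical names):
select user_id,
       receipt_id,
       amount,
       sum(amount) over (partition by user_id
                         order by paid_on, receipt_id) running_total
from receipts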
Wouldn't a QUALIFY clause be both simplest and best?
select userid, my_date, ...
from users
qualify rank() over (partition by userid order by my_date desc) = 1
For context, on Teradata a decent-size test of this runs in 17s with this QUALIFY version and in 23s with the 'inline view'/Aldridge solution #1.
In Oracle 12c+, you can use Top n queries along with analytic function rank to achieve this very concisely without subqueries:
select *
from your_table
order by rank() over (partition by user_id order by my_date desc)
fetch first 1 row with ties;
The above returns all the rows with max my_date per user.
If you want only one row with max date, then replace the rank with row_number:
select *
from your_table
order by row_number() over (partition by user_id order by my_date desc)
fetch first 1 row with ties;
With PostgreSQL 8.4 or later, you can use this:
select user_id, user_value_1, user_value_2
from (select user_id, user_value_1, user_value_2, row_number()
over (partition by user_id order by user_date desc)
from users) as r
where r.row_number=1
Just had to write a "live" example at work :)
This one supports multiple values for UserId on the same date.
Columns:
UserId, Value, Date
SELECT
DISTINCT UserId,
MAX(Date) OVER (PARTITION BY UserId ORDER BY Date DESC),
MAX(Values) OVER (PARTITION BY UserId ORDER BY Date DESC)
FROM
(
SELECT UserId, Date, SUM(Value) As Values
FROM <<table_name>>
GROUP BY UserId, Date
)
You can use FIRST_VALUE instead of MAX and look it up in the explain plan. I didn't have the time to play with it.
Of course, if searching through huge tables, it's probably better if you use FULL hints in your query.
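A sketch of that FIRST_VALUE variant, mirroring the query above (untested, as noted; the inner sum is aliased SumValue here to sidestep the reserved word VALUES):
SELECT
 DISTINCT UserId,
 FIRST_VALUE(Date) OVER (PARTITION BY UserId ORDER BY Date DESC) MaxDate,
 FIRST_VALUE(SumValue) OVER (PARTITION BY UserId ORDER BY Date DESC) MaxSumValue
FROM
(
 SELECT UserId, Date, SUM(Value) AS SumValue
 FROM <<table_name>>
 GROUP BY UserId, Date
) t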
I'm quite late to the party, but the following hack will outperform both correlated subqueries and any analytic function. It has one restriction: the values must convert to strings. So it works for dates, numbers and other strings. The code does not look good, but the execution profile is great.
select
userid,
to_number(substr(max(to_char(date,'yyyymmdd') || to_char(value)), 9)) as value,
max(date) as date
from
users
group by
userid
The reason why this code works so well is that it only needs to scan the table once. It does not require any indexes and most importantly it does not need to sort the table, which most analytics functions do. Indexes will help though if you need to filter the result for a single userid.
Use ROW_NUMBER() to assign a unique ranking on descending Date for each UserId, then filter to the first row for each UserId (i.e., ROW_NUMBER = 1).
SELECT UserId, Value, Date
FROM (SELECT UserId, Value, Date,
ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date DESC) rn
FROM users) u
WHERE rn = 1;
If you're using Postgres, you can use array_agg like
SELECT userid,MAX(adate),(array_agg(value ORDER BY adate DESC))[1] as value
FROM YOURTABLE
GROUP BY userid
I'm not familiar with Oracle. This is what I came up with:
SELECT
userid,
MAX(adate),
SUBSTR(
(LISTAGG(value, ',') WITHIN GROUP (ORDER BY adate DESC)),
0,
INSTR((LISTAGG(value, ',') WITHIN GROUP (ORDER BY adate DESC)), ',')-1
) as value
FROM YOURTABLE
GROUP BY userid
Both queries return the same results as the accepted answer.
I think something like this. (Forgive me for any syntax mistakes; I'm used to using HQL at this point!)
EDIT: Also misread the question! Corrected the query...
SELECT UserId, Value
FROM Users AS user
WHERE Date = (
SELECT MAX(Date)
FROM Users AS maxtest
WHERE maxtest.UserId = user.UserId
)
I think you should use this variant of the previous query:
SELECT UserId, Value FROM Users U1 WHERE
Date = ( SELECT MAX(Date) FROM Users where UserId = U1.UserId)
Select
UserID,
Value,
Date
From
Table,
(
Select
UserID,
Max(Date) as MDate
From
Table
Group by
UserID
) as subQuery
Where
Table.UserID = subQuery.UserID and
Table.Date = subQuery.mDate
select VALUE from TABLE1 where TIME =
(select max(TIME) from TABLE1 where DATE=
(select max(DATE) from TABLE1 where CRITERIA=CRITERIA))
(T-SQL) First get all the users and their maxdate. Join with the table to find the corresponding values for the users on the maxdates.
create table users (userid int , value int , date datetime)
insert into users values (1, 1, '20010101')
insert into users values (1, 2, '20020101')
insert into users values (2, 1, '20010101')
insert into users values (2, 3, '20030101')
select T1.userid, T1.value, T1.date
from users T1,
(select max(date) as maxdate, userid from users group by userid) T2
where T1.userid= T2.userid and T1.date = T2.maxdate
results:
userid value date
----------- ----------- --------------------------
2 3 2003-01-01 00:00:00.000
1 2 2002-01-01 00:00:00.000
The answer here is Oracle-only. Here's a bit more sophisticated answer in plain, portable SQL:
Who has the best overall homework result (maximum sum of homework points)?
SELECT FIRST, LAST, SUM(POINTS) AS TOTAL
FROM STUDENTS S, RESULTS R
WHERE S.SID = R.SID AND R.CAT = 'H'
GROUP BY S.SID, FIRST, LAST
HAVING SUM(POINTS) >= ALL (SELECT SUM (POINTS)
FROM RESULTS
WHERE CAT = 'H'
GROUP BY SID)
And here is a more difficult example, which needs some explanation that I don't have time for at the moment:
Give the book (ISBN and title) that is most popular in 2008, i.e., which is borrowed most often in 2008.
SELECT X.ISBN, X.title, X.loans
FROM (SELECT Book.ISBN, Book.title, count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.title) X
HAVING loans >= ALL (SELECT count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.title);
Hope this helps (anyone).. :)
Assuming Date is unique for a given UserID, here's some TSQL:
SELECT
UserTest.UserID, UserTest.Value
FROM UserTest
INNER JOIN
(
SELECT UserID, MAX(Date) MaxDate
FROM UserTest
GROUP BY UserID
) Dates
ON UserTest.UserID = Dates.UserID
AND UserTest.Date = Dates.MaxDate
A solution for MySQL, which doesn't have the concepts of partitioning, KEEP, or DENSE_RANK.
select userid,
       my_date,
       ...
from
(
  select @sno := case when @pid is null or @pid <> userid then 0
                      else @sno + 1
                 end as serialnumber,
         @pid := userid,
         my_date,
         ...
  from users
  -- initialize the user variables (MySQL user variables use @, not #)
  cross join (select @sno := 0, @pid := null) init
  order by userid, my_date desc
) a
where a.serialnumber = 0
Reference: http://benincampus.blogspot.com/2013/08/select-rows-which-have-maxmin-value-in.html
select userid, value, date
from thetable t1 ,
( select t2.userid, max(t2.date) date2
from thetable t2
group by t2.userid ) t3
where t3.userid = t1.userid and
t3.date2 = t1.date
IMHO this works. HTH
I think this should work?
Select
T1.UserId,
(Select Top 1 T2.Value From Table T2 Where T2.UserId = T1.UserId Order By Date Desc) As 'Value'
From
Table T1
Group By
T1.UserId
Order By
T1.UserId
On my first try I misread the question; following the top answer, here is a complete example with correct results:
CREATE TABLE table_name (id int, the_value varchar(2), the_date datetime);
INSERT INTO table_name (id,the_value,the_date) VALUES(1 ,'a','1/1/2000');
INSERT INTO table_name (id,the_value,the_date) VALUES(1 ,'b','2/2/2002');
INSERT INTO table_name (id,the_value,the_date) VALUES(2 ,'c','1/1/2000');
INSERT INTO table_name (id,the_value,the_date) VALUES(2 ,'d','3/3/2003');
INSERT INTO table_name (id,the_value,the_date) VALUES(2 ,'e','3/3/2003');
--
select id, the_value
from table_name u1
where the_date = (select max(the_date)
from table_name u2
where u1.id = u2.id)
--
id the_value
----------- ---------
2 d
2 e
1 b
(3 row(s) affected)
This will also take care of duplicates (return one row for each user_id):
SELECT *
FROM (
SELECT u.*, FIRST_VALUE(u.rowid) OVER(PARTITION BY u.user_id ORDER BY u.date DESC) AS last_rowid
FROM users u
) u2
WHERE u2.rowid = u2.last_rowid
Just tested this and it seems to work on a logging table
select ColumnNames, max(DateColumn) from log group by ColumnNames order by 1 desc
This should be as simple as:
SELECT UserId, Value
FROM Users u
WHERE Date = (SELECT MAX(Date) FROM Users WHERE UserID = u.UserID)
If (UserID, Date) is unique, i.e. no date appears twice for the same user then:
select TheTable.UserID, TheTable.Value
from TheTable inner join (select UserID, max([Date]) MaxDate
from TheTable
group by UserID) UserMaxDate
on TheTable.UserID = UserMaxDate.UserID
and TheTable.[Date] = UserMaxDate.MaxDate;
select UserId,max(Date) over (partition by UserId) value from users;