I am trying to set up a process to reconcile a table based on specific constraints. (SQL Server)
The table contains the following columns:
Start Time
End Time
Status
Hours
Note
My logic is the following, starting at row 2:
if StartTime(row 2) = EndTime(row 1) and Status(row 2) = Status(row 1)
then
Hours = Hours(row 1) + Hours(row 2)
move to the next row
Any tips would be greatly appreciated on how I should approach this problem.
Thanks
Your question is unclear, but the following should contain the elements to help you achieve what you really want (please edit the question as already suggested):
Setup
-- drop table DatesAndTimes
create table DatesAndTimes
(
RowNo INT NOT NULL IDENTITY(1, 1),
StartTime DATETIME2,
EndTime DATETIME2,
Status INT,
Hours INT
)
GO
insert into DatesAndTimes (StartTime, EndTime, Status, Hours) VALUES
('20170101', '20170102', 1, 5),
('20170102', '20170103', 1, 6),
('20170104', '20170105', 2, 4),
('20170105', '20170107', 2, 3),
('20170110', '20170111', 3, 2)
Test
select * from DatesAndTimes
begin tran
;with cte as (
select TOP 100 PERCENT RowNo, StartTime, EndTime, Status, Hours,
LAG(EndTime) OVER (ORDER BY RowNo) PrevEndTime,
LAG(Status) OVER (ORDER BY RowNo) PrevStatus,
LAG(Hours) OVER (ORDER BY RowNo) PrevHours
from DatesAndTimes
order by RowNo
)
update Dest
SET Dest.Hours = (
CASE WHEN C.StartTime = C.PrevEndTime AND C.Status = C.PrevStatus THEN C.Hours + C.PrevHours
ELSE C.Hours
END)
from cte AS C
join DatesAndTimes Dest ON Dest.RowNo = C.RowNo
select * from DatesAndTimes
rollback
begin tran .. rollback are there because I do not want to actually update the initial data in the table. They should be dropped when really doing the update.
LAG is a SQL Server 2012+ function that lets you access values from preceding rows (use LEAD to access following rows).
TOP 100 PERCENT .. ORDER BY is put there to try to ensure the order of the UPDATE. Although the update usually proceeds in clustered-index or insert order, that order is not guaranteed. It is not really the cleverest way to order (ORDER BY alone is not allowed in a CTE, so this looks like a hack to me).
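If you want to sanity-check the LAG logic outside SQL Server, the same window-function arithmetic runs in SQLite (3.25+, which Python's bundled sqlite3 usually provides). A minimal sketch using the sample data above; this only illustrates the CASE/LAG computation, not the UPDATE itself:

```python
import sqlite3

# In-memory sanity check of the LAG-based merge, mirroring the
# DatesAndTimes sample data from the answer above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DatesAndTimes (
        RowNo INTEGER PRIMARY KEY,
        StartTime TEXT, EndTime TEXT, Status INTEGER, Hours INTEGER
    );
    INSERT INTO DatesAndTimes (StartTime, EndTime, Status, Hours) VALUES
        ('2017-01-01', '2017-01-02', 1, 5),
        ('2017-01-02', '2017-01-03', 1, 6),
        ('2017-01-04', '2017-01-05', 2, 4),
        ('2017-01-05', '2017-01-07', 2, 3),
        ('2017-01-10', '2017-01-11', 3, 2);
""")

# Same CASE logic as the UPDATE: add the previous row's hours when this
# row starts exactly where the previous one ended, with the same status.
rows = conn.execute("""
    SELECT RowNo,
           CASE WHEN StartTime = LAG(EndTime) OVER w
                 AND Status    = LAG(Status)  OVER w
                THEN Hours + LAG(Hours) OVER w
                ELSE Hours
           END AS MergedHours
    FROM DatesAndTimes
    WINDOW w AS (ORDER BY RowNo)
""").fetchall()
print(rows)  # rows 2 and 4 pick up the previous row's hours
```

Rows 2 and 4 come back as 11 (5+6) and 7 (4+3); the rest keep their own Hours.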
Related
I have a table as shown in the screenshot (first two columns) and I need to create a column like the last one. I'm trying to calculate the length of each sequence of consecutive values for each id.
For this, the last column is required. I played around with
row_number() over (partition by id, value)
but did not have much success, since the circled number was (quite predictably) computed as 2 instead of 1.
Please help!
First of all, we need a way to define how the rows are ordered. For example, in your sample data there is no way to be sure that the 'first' row (1, 1) will always be displayed before the 'second' row (1, 0).
That's why in my sample data I have added an identity column. In your real case, the rows can be ordered by a row ID, a date column or something else, but you need to ensure the rows can be sorted by a unique criterion.
So, the task is pretty simple:
calculate trigger switch - when value is changed
calculate groups
calculate rows
That's it. I have used a common table expression and left all columns in, to make the logic easy to follow. You are free to break this into separate statements and remove some of the columns.
DECLARE #DataSource TABLE
(
[RowID] INT IDENTITY(1, 1)
,[ID] INT
,[value] INT
);
INSERT INTO #DataSource ([ID], [value])
VALUES (1, 1)
,(1, 0)
,(1, 0)
,(1, 1)
,(1, 1)
,(1, 1)
--
,(2, 0)
,(2, 1)
,(2, 0)
,(2, 0);
WITH DataSourceWithSwitch AS
(
SELECT *
,IIF(LAG([value]) OVER (PARTITION BY [ID] ORDER BY [RowID]) = [value], 0, 1) AS [Switch]
FROM #DataSource
), DataSourceWithGroup AS
(
SELECT *
,SUM([Switch]) OVER (PARTITION BY [ID] ORDER BY [RowID]) AS [Group]
FROM DataSourceWithSwitch
)
SELECT *
,ROW_NUMBER() OVER (PARTITION BY [ID], [Group] ORDER BY [RowID]) AS [GroupRowID]
FROM DataSourceWithGroup
ORDER BY [RowID];
You want results that depend on the actual ordering of rows in the data source. In SQL you operate on relations, sometimes on ordered sets of relation rows. Your desired end result is not well-defined in terms of SQL unless you introduce an additional column in your source table over which your data is ordered (e.g. an auto-increment or timestamp column).
Note: this answers the original question and doesn't take into account additional timestamp column mentioned in the comment. I'm not updating my answer since there is already an accepted answer.
One way to solve it could be through a recursive CTE:
create table #tmp (i int identity,id int, value int, rn int);
insert into #tmp (id,value) VALUES
(1,1),(1,0),(1,0),(1,1),(1,1),(1,1),
(2,0),(2,1),(2,0),(2,0);
WITH numbered AS (
SELECT i,id,value, 1 seq FROM #tmp WHERE i=1 UNION ALL
SELECT a.i,a.id,a.value, CASE WHEN a.id=b.id AND a.value=b.value THEN b.seq+1 ELSE 1 END
FROM #tmp a INNER JOIN numbered b ON a.i=b.i+1
)
SELECT * FROM numbered -- OPTION (MAXRECURSION 1000)
This will return the following:
i id value seq
1 1 1 1
2 1 0 1
3 1 0 2
4 1 1 1
5 1 1 2
6 1 1 3
7 2 0 1
8 2 1 1
9 2 0 1
10 2 0 2
See my little demo here: https://rextester.com/ZZEIU93657
A prerequisite for the CTE to work is a sequenced table (e.g. a table with an identity column in it) as a source. In my example I introduced the column i for this. As a starting point I need to find the first entry of the source table; in my case this is the entry with i=1.
For a longer source table you might run into a recursion-limit error, as the default for MAXRECURSION is 100. In this case you should uncomment the OPTION setting after my SELECT clause above. You can either set it to a higher value (as shown) or switch the limit off completely by setting it to 0.
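For intuition: the recursion is really a sequential scan, where each step carries the previous row's (id, value, seq) along. The same computation as a plain loop (a Python sketch; the list literal mirrors the sample data, and the variable names are mine):

```python
# Python sketch of what the recursive CTE computes: walk rows in i order,
# restarting the counter whenever id or value changes.
rows = [(1, 1, 1), (2, 1, 0), (3, 1, 0), (4, 1, 1), (5, 1, 1),
        (6, 1, 1), (7, 2, 0), (8, 2, 1), (9, 2, 0), (10, 2, 0)]  # (i, id, value)

result = []
prev = None  # previous (id, value, seq)
for i, id_, value in rows:
    if prev and prev[0] == id_ and prev[1] == value:
        seq = prev[2] + 1   # same run: increment
    else:
        seq = 1             # new id or new value: restart
    result.append((i, id_, value, seq))
    prev = (id_, value, seq)

print([r[3] for r in result])  # [1, 1, 2, 1, 2, 3, 1, 1, 1, 2]
```

The printed seq column matches the table above.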
IMHO, this is easier to do with a cursor and a loop.
Maybe there is a way to do the job with a self-join:
declare #t table (id int, val int)
insert into #t (id, val)
select 1 as id, 1 as val
union all select 1, 0
union all select 1, 0
union all select 1, 1
union all select 1, 1
union all select 1, 1
;with cte1 (id , val , num ) as
(
select id, val, row_number() over (ORDER BY (SELECT 1)) as num from #t
)
, cte2 (id, val, num, N) as
(
select id, val, num, 1 from cte1 where num = 1
union all
select t1.id, t1.val, t1.num,
case when t1.id=t2.id and t1.val=t2.val then t2.N + 1 else 1 end
from cte1 t1 inner join cte2 t2 on t1.num = t2.num + 1 where t1.num > 1
)
select * from cte2
I have this table:
ValueId bigint // (identity) item ID
ListId bigint // group ID
ValueDelta int // item value
ValueCreated datetime2 // item created
What I need is to find consecutive Values within the same Group ordered by Created, not ID. Created and ID are not guaranteed to be in the same order.
So the output should be:
ListID bigint
FirstId bigint // from this ID (first in LID with Value ordered by Date)
LastId bigint // to this ID (last in LID with Value ordered by Date)
ValueDelta int // all share this value
ValueCount // and this many occurrences (number of items between FirstId and LastId)
I can do this with Cursors but I'm sure that's not the best idea so I'm wondering if this can be done in a query.
Please, for the answer (if any), explain it a bit.
UPDATE: SQLfiddle basic data set
It does look like a gaps-and-island problem.
Here is one way to do it. It would likely work faster than your variant.
The standard idea for gaps-and-islands is to generate two sets of row numbers partitioning them in two ways. The difference between such row numbers (rn1-rn2) would remain the same within each consecutive chunk. Run the query below CTE-by-CTE and examine intermediate results to see what is going on.
WITH
CTE_RN
AS
(
SELECT
[ValueId]
,[ListId]
,[ValueDelta]
,[ValueCreated]
,ROW_NUMBER() OVER (PARTITION BY ListID ORDER BY ValueCreated) AS rn1
,ROW_NUMBER() OVER (PARTITION BY ListID, [ValueDelta] ORDER BY ValueCreated) AS rn2
FROM [Value]
)
SELECT
ListID
,MIN(ValueID) AS FirstID
,MAX(ValueID) AS LastID
,MIN(ValueCreated) AS FirstCreated
,MAX(ValueCreated) AS LastCreated
,ValueDelta
,COUNT(*) AS ValueCount
FROM CTE_RN
GROUP BY
ListID
,ValueDelta
,rn1-rn2
ORDER BY
FirstCreated
;
This query produces the same result as yours on your sample data set.
It is not quite clear whether FirstID and LastID can be MIN and MAX, or they indeed must be from the first and last rows (when ordered by ValueCreated). If you need really first and last, the query would become a bit more complicated.
In your original sample data set the "first" and "min" for the FirstID are the same. Let's change the sample data set a little to highlight this difference:
insert into [Value]
([ListId], [ValueDelta], [ValueCreated])
values
(1, 1, '2019-01-01 01:01:02'), -- 1.1
(1, 0, '2019-01-01 01:02:01'), -- 2.1
(1, 0, '2019-01-01 01:03:01'), -- 2.2
(1, 0, '2019-01-01 01:04:01'), -- 2.3
(1, -1, '2019-01-01 01:05:01'), -- 3.1
(1, -1, '2019-01-01 01:06:01'), -- 3.2
(1, 1, '2019-01-01 01:01:01'), -- 1.2
(1, 1, '2019-01-01 01:08:01'), -- 4.2
(2, 1, '2019-01-01 01:08:01') -- 5.1
;
All I did was swap the ValueCreated between the first and seventh rows, so now the FirstID of the first group is 7 and the LastID is 1. Your query returns the correct result; my simple query above doesn't.
Here is the variant that produces correct result. I decided to use FIRST_VALUE and LAST_VALUE functions to get the appropriate IDs. Again, run the query CTE-by-CTE and examine intermediate results to see what is going on.
This variant produces the same result as your query even with the adjusted sample data set.
WITH
CTE_RN
AS
(
SELECT
[ValueId]
,[ListId]
,[ValueDelta]
,[ValueCreated]
,ROW_NUMBER() OVER (PARTITION BY ListID ORDER BY ValueCreated) AS rn1
,ROW_NUMBER() OVER (PARTITION BY ListID, ValueDelta ORDER BY ValueCreated) AS rn2
FROM [Value]
)
,CTE2
AS
(
SELECT
ValueId
,ListId
,ValueDelta
,ValueCreated
,rn1
,rn2
,rn1-rn2 AS Diff
,FIRST_VALUE(ValueID) OVER(
PARTITION BY ListID, ValueDelta, rn1-rn2 ORDER BY ValueCreated
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS FirstID
,LAST_VALUE(ValueID) OVER(
PARTITION BY ListID, ValueDelta, rn1-rn2 ORDER BY ValueCreated
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS LastID
FROM CTE_RN
)
SELECT
ListID
,FirstID
,LastID
,MIN(ValueCreated) AS FirstCreated
,MAX(ValueCreated) AS LastCreated
,ValueDelta
,COUNT(*) AS ValueCount
FROM CTE2
GROUP BY
ListID
,ValueDelta
,rn1-rn2
,FirstID
,LastID
ORDER BY FirstCreated;
Use a CTE that adds a Row_Number column, partitioned by GroupId and Value and ordered by Created.
Then select from the CTE, GROUP BY GroupId and Value; use COUNT(*) to get the Count, and use correlated subqueries to select the ValueId with the MIN(RowNumber) (which will always be 1, so you can just use that instead of MIN) and the MAX(RowNumber) to get FirstId and LastId.
Although, now that I've noticed you're using SQL Server 2017, you should be able to use First_Value() and Last_Value() instead of correlated subqueries.
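A runnable sketch of the First_Value/Last_Value idea, combined with the usual rn1-rn2 islands difference. It is run here via Python's sqlite3 (needs SQLite 3.25+) purely for illustration; the window-function syntax is close to SQL Server 2017, and the table is renamed Value_ only to sidestep the keyword:

```python
import sqlite3

# Islands per (ListId, ValueDelta) via the rn1-rn2 difference, then
# FIRST_VALUE/LAST_VALUE to pick the boundary IDs by ValueCreated.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Value_ (ValueId INTEGER PRIMARY KEY,
                         ListId INTEGER, ValueDelta INTEGER, ValueCreated TEXT);
    INSERT INTO Value_ (ListId, ValueDelta, ValueCreated) VALUES
        (1,  1, '2019-01-01 01:01:01'),
        (1,  0, '2019-01-01 01:02:01'),
        (1,  0, '2019-01-01 01:03:01'),
        (1,  0, '2019-01-01 01:04:01'),
        (1, -1, '2019-01-01 01:05:01'),
        (1, -1, '2019-01-01 01:06:01'),
        (1,  1, '2019-01-01 01:01:02'),
        (1,  1, '2019-01-01 01:08:01'),
        (2,  1, '2019-01-01 01:08:01');
""")

rows = conn.execute("""
    WITH rn AS (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY ListId ORDER BY ValueCreated) -
               ROW_NUMBER() OVER (PARTITION BY ListId, ValueDelta
                                  ORDER BY ValueCreated) AS grp
        FROM Value_
    ), fl AS (
        SELECT ListId, ValueDelta, grp,
               FIRST_VALUE(ValueId) OVER w AS FirstId,
               LAST_VALUE(ValueId)  OVER w AS LastId,
               COUNT(*) OVER w AS ValueCount
        FROM rn
        WINDOW w AS (PARTITION BY ListId, ValueDelta, grp ORDER BY ValueCreated
                     ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
    )
    SELECT DISTINCT ListId, FirstId, LastId, ValueDelta, ValueCount
    FROM fl ORDER BY ListId, FirstId
""").fetchall()
for r in rows:
    print(r)
```

On the question's sample data this yields the five sets listed in the accepted answer, with FirstId/LastId taken by ValueCreated order rather than MIN/MAX.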
After many iterations I think I have a working solution. I'm absolutely sure it's far from optimal but it works.
Link is here: http://sqlfiddle.com/#!18/4ee9f/3
Sample data:
create table [Value]
(
[ValueId] bigint not null identity(1,1),
[ListId] bigint not null,
[ValueDelta] int not null,
[ValueCreated] datetime2 not null,
constraint [PK_Value] primary key clustered ([ValueId])
);
insert into [Value]
([ListId], [ValueDelta], [ValueCreated])
values
(1, 1, '2019-01-01 01:01:01'), -- 1.1
(1, 0, '2019-01-01 01:02:01'), -- 2.1
(1, 0, '2019-01-01 01:03:01'), -- 2.2
(1, 0, '2019-01-01 01:04:01'), -- 2.3
(1, -1, '2019-01-01 01:05:01'), -- 3.1
(1, -1, '2019-01-01 01:06:01'), -- 3.2
(1, 1, '2019-01-01 01:01:02'), -- 1.2
(1, 1, '2019-01-01 01:08:01'), -- 4.2
(2, 1, '2019-01-01 01:08:01') -- 5.1
The Query that seems to work:
-- this is the actual order of data
select *
from [Value]
order by [ListId] asc, [ValueCreated] asc;
-- there are 4 sets here
-- set 1 GroupId=1, Id=1&7, Value=1
-- set 2 GroupId=1, Id=2-4, Value=0
-- set 3 GroupId=1, Id=5-6, Value=-1
-- set 4 GroupId=1, Id=8-8, Value=1
-- set 5 GroupId=2, Id=9-9, Value=1
with [cte1] as
(
select [v1].[ListId]
,[v2].[ValueId] as [FirstId], [v2].[ValueCreated] as [FirstCreated]
,[v1].[ValueId] as [LastId], [v1].[ValueCreated] as [LastCreated]
,isnull([v1].[ValueDelta], 0) as [ValueDelta]
from [dbo].[Value] [v1]
join [dbo].[Value] [v2] on [v2].[ListId] = [v1].[ListId]
and isnull([v2].[ValueDeltaPrev], 0) = isnull([v1].[ValueDeltaPrev], 0)
and [v2].[ValueCreated] <= [v1].[ValueCreated] and not exists (
select 1
from [dbo].[Value] [v3]
where 1=1
and ([v3].[ListId] = [v1].[ListId])
and ([v3].[ValueCreated] between [v2].[ValueCreated] and [v1].[ValueCreated])
and [v3].[ValueDelta] != [v1].[ValueDelta]
)
), [cte2] as
(
select [t1].*
from [cte1] [t1]
where not exists (select 1 from [cte1] [t2] where [t2].[ListId] = [t1].[ListId]
and ([t1].[FirstId] != [t2].[FirstId] or [t1].[LastId] != [t2].[LastId])
and [t1].[FirstCreated] between [t2].[FirstCreated] and [t2].[LastCreated]
and [t1].[LastCreated] between [t2].[FirstCreated] and [t2].[LastCreated]
)
)
select [ListId], [FirstId], [LastId], [FirstCreated], [LastCreated], [ValueDelta] as [ValueDelta]
,(select count(*) from [dbo].[Value] where [ListId] = [t].[ListId] and [ValueCreated] between [t].[FirstCreated] and [t].[LastCreated]) as [ValueCount]
from [cte2] [t];
How it works:
join the table to itself on the same list, but only on older (or equal, to handle single-row sets) dates
join again on itself and exclude any overlaps, keeping only the largest date range per set
once the largest sets are identified, count the entries between the set dates
If anyone can find a better / friendlier solution, you get the answer.
PS: The dumb straightforward Cursor approach seems a lot faster than this. Still testing.
I have a dataset of hospitalisations ('spells') - 1 row per spell. I want to drop any spells recorded within a week after another (there could be multiple) - the rationale being is that they're likely symptomatic of the same underlying cause. Here is some play data:
create table hif_user.rzb_recurse_src (
patid integer not null,
eventdate integer not null,
type smallint not null
);
insert into hif_user.rzb_recurse_src values (1,1,1);
insert into hif_user.rzb_recurse_src values (1,3,2);
insert into hif_user.rzb_recurse_src values (1,5,2);
insert into hif_user.rzb_recurse_src values (1,9,2);
insert into hif_user.rzb_recurse_src values (1,14,2);
insert into hif_user.rzb_recurse_src values (2,1,1);
insert into hif_user.rzb_recurse_src values (2,5,1);
insert into hif_user.rzb_recurse_src values (2,19,2);
Only spells of type 2 - within a week after any other - are to be dropped. Type 1 spells are to remain.
For patient 1, dates 1 & 9 should be kept. For patient 2, all rows should remain.
The issue is with patient 1. Spell date 9 is identified for dropping as it is close to spell date 5; however, as spell date 5 is close to spell date 1 it should be dropped, therefore allowing spell date 9 to live...
So, it seems a recursive problem. However, I've not used recursive programming in SQL before and I'm struggling to really picture how to do it. Can anyone help? I should add that I'm using Teradata which has more restrictions than most with recursive SQL (only UNION ALL sets allowed I believe).
It's cursor logic: check one row after the other to see if it fits your rules, so recursion is the easiest (maybe the only) way to solve your problem.
To get a decent performance you need a Volatile Table to facilitate this row-by-row processing:
CREATE VOLATILE TABLE vt (patid, eventdate, exac_type, rn, startdate) AS
(
SELECT r.*
,ROW_NUMBER() -- needed to facilitate the join
OVER (PARTITION BY patid ORDER BY eventdate) AS rn
FROM hif_user.rzb_recurse_src AS r
) WITH DATA ON COMMIT PRESERVE ROWS;
WITH RECURSIVE cte (patid, eventdate, exac_type, rn, startdate) AS
(
SELECT vt.*
,eventdate AS startdate
FROM vt
WHERE rn = 1 -- start with the first row
UNION ALL
SELECT vt.*
-- check if type = 1 or more than 7 days from the last eventdate
,CASE WHEN vt.eventdate > cte.startdate + 7
OR vt.exac_type = 1
THEN vt.eventdate -- new start date
ELSE cte.startdate -- keep old date
END
FROM vt JOIN cte
ON vt.patid = cte.patid
AND vt.rn = cte.rn + 1 -- proceed to next row
)
SELECT *
FROM cte
WHERE eventdate - startdate = 0 -- only new start days
order by patid, eventdate
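For intuition, here is the rule this recursion implements, written as a per-patient sequential scan (a Python sketch, not Teradata code; the `anchor` dict is my own naming):

```python
# Keep a spell if it is type 1, or if it starts more than 7 days after the
# last *kept* spell for that patient; every kept spell resets the anchor.
spells = [(1, 1, 1), (1, 3, 2), (1, 5, 2), (1, 9, 2), (1, 14, 2),
          (2, 1, 1), (2, 5, 1), (2, 19, 2)]  # (patid, eventdate, type)

kept = []
anchor = {}  # patid -> eventdate of the last kept spell
for patid, eventdate, typ in spells:
    if typ == 1 or patid not in anchor or eventdate > anchor[patid] + 7:
        kept.append((patid, eventdate, typ))
        anchor[patid] = eventdate

print(kept)
```

For patient 1 this keeps dates 1 and 9 (date 9 survives because date 5 was itself dropped), and all of patient 2's rows remain, matching the expected output in the question.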
I think the key to solving this is getting the first date more than 7 days from the current date and then doing a recursive subquery:
with recursive rrs as (
select rrs.*,
(select min(rrs2.eventdate)
from hif_user.rzb_recurse_src rrs2
where rrs2.patid = rrs.patid and
rrs2.eventdate > rrs.eventdate + 7
) as eventdate7
from hif_user.rzb_recurse_src rrs
),
cte as (
select patid, min(eventdate) as eventdate, min(eventdate7) as eventdate7
from hif_user.rzb_recurse_src rrs
group by patid
union all
select cte.patid, cte.eventdate7, rrs.eventdate7
from cte join
hif_user.rzb_recurse_src rrs
on rrs.patid = cte.patid and
rrs.eventdate = cte.eventdate7
)
select cte.patid, cte.eventdate
from cte;
If you want additional columns, then join in the original table at the last step.
I have a log table where one of the fields is a filename. These filenames are versioned with a suffix at the end of filename. Say we made file SampleName.xml but later had to revise this -- the new version would appear in the log as SampleName_V2.xml (and this could continue increasing indefinitely, but the most I've seen is V8).
I need a way to SELECT every entry in this log, but only keep the entry with the latest version number on the filename.
I feel like there's got to be an easy answer to this, but I've been trying to think of it all day and can't come up with it.
Anyone have any ideas?
EDIT: We do have a DateTime field in every row as well, if that helps.
Here is something that will do the job for you. The idea is to use a temp table that also holds the file names without the _v suffix.
I've probably made this more complex than needed, but you'll be able to see the point.
DROP TABLE #TmpResults
CREATE TABLE #TmpResults
(
Original nvarchar(100),
WO_Version nvarchar(100),
Last_Update datetime
)
INSERT INTO #TmpResults
(Original, WO_Version, Last_Update)
VALUES
('file1.xml', 'file1.xml', '01/01/2013'),
('file2.xml', 'file2.xml', '02/01/2013'),
('file2_v2.xml', 'file2.xml', '03/01/2013'),
('file3.xml', 'file3.xml', '01/01/2013'),
('file3_v2.xml', 'file3.xml', '01/02/2013'),
('file3_v3.xml', 'file3.xml', '01/03/2013'),
('file4.xml', 'file4.xml', '05/01/2013'),
('file5.xml', 'file5.xml', '06/01/2013'),
('file5_v2.xml', 'file5.xml', '06/02/2013'),
('file5_v3.xml', 'file5.xml', '06/03/2013'),
('file5_v4.xml', 'file5.xml', '06/04/2013')
SELECT
P.WO_Version,
(SELECT MAX(Last_Update) FROM #TmpResults T WHERE T.WO_Version =
P.WO_Version) as Last_Update,
(SELECT TOP 1 Original
FROM #TmpResults T
WHERE T.Last_Update =
( SELECT MAX(Last_Update)
FROM #TmpResults Tm
WHERE Tm.WO_Version = P.WO_Version) ) as Last_FileVersion
FROM
(
SELECT DISTINCT WO_Version
FROM #TmpResults
GROUP BY WO_Version
) P
Here is the SELECT query you can use to fill the temp table with SELECT INTO
SELECT
Original_File_Name,
REPLACE(Original_File_Name,
SUBSTRING(Original_File_Name, LEN(Original_File_Name) - CHARINDEX('v_',REVERSE(Original_File_Name), 1), LEN(Original_File_Name) - CHARINDEX('v_',REVERSE(Original_File_Name), 1)),
SUBSTRING(Original_File_Name, LEN(Original_File_Name) - CHARINDEX('.',REVERSE(Original_File_Name), 1) +1 , LEN(Original_File_Name) - CHARINDEX('.',REVERSE(Original_File_Name), 1))) as WO_Version,
Last_Update
FROM OriginalDataTable
I think this will give you the result:
SELECT TOP(1) filename
FROM table
ORDER BY datetime_field DESC
If you are sure that your version numbers are in the order _V1 to _V8, this will help you:
SELECT TOP(1) filename
FROM table
ORDER BY CAST(RIGHT(SUBSTRING([Filename],1,LEN(SUBSTRING([Filename], 0,
PATINDEX('%.%',[Filename])) + '.') - 1),1) AS INT)
UPDATED
I am suggesting another method which gives you all the file names with the latest version.
;WITH cte AS
(
SELECT
ROW_NUMBER() OVER (PARTITION BY LEFT([Filename],LEN([Filename])-CHARINDEX('_',[Filename]))
ORDER BY date_field DESC
/*OR order by CAST(RIGHT(SUBSTRING([Filename],1,LEN(SUBSTRING([Filename], 0,
PATINDEX('%.%',[Filename])) + '.') - 1),1) AS INT) ASC*/
) AS rno,
filename
FROM table
)
SELECT * FROM cte WHERE rno=1
I have a table of items which change status every few weeks. I want to look at an arbitrary day and figure out how many items were in each status.
For example:
tbl_ItemHistory
ItemID
StatusChangeDate
StatusID
Sample data:
1001, 1/1/2010, 1
1001, 4/5/2010, 2
1001, 6/15/2010, 4
1002, 4/1/2010, 1
1002, 6/1/2010, 3
...
So I need to figure out how many items were in each status for a given day. So on 5/1/2010, there was one item (1001) in status 2 and one item in status 1 (1002).
I want to create a cached table every night that has a row for every item and every day of the year so I can show status changes over time in a chart. I don't know much about SQL. I was thinking about using a for loop, but based on some of the creative answers I've seen on the forum, I doubt that's the right way.
I'm using SQL Server 2008R2
I looked around and I think this is similar to this question: https://stackoverflow.com/questions/11183164/show-data-change-over-time-in-a-chart but that one wasn't answered. Is there a way to do these things?
A coworker showed me a cool way to do it so I thought I would contribute it to the community:
declare #test table (ItemID int, StatusChangeDate datetime, StatusId tinyint);
insert #test values
(1001, '1/1/2010', 1),
(1001, '4/5/2010', 2),
(1001, '6/15/2010', 4),
(1002, '4/2/2010', 1),
(1002, '6/1/2010', 3);
with
itzik1(N) as (
select 1 union all select 1 union all
select 1 union all select 1), --4
itzik2(N) as (select 1 from itzik1 i cross join itzik1), --16
itzik3(N) as (select 1 from itzik2 i cross join itzik2), --256
itzik4(N) as (select 1 from itzik3 i cross join itzik3), --65536 (184 years)
tally(N) as (select row_number() over (order by (select null)) from itzik4)
select ItemID, StatusChangeDate, StatusId from(
select
test.ItemID,
dates.StatusChangeDate,
test.StatusId,
row_number() over (
partition by test.ItemId, dates.StatusChangeDate
order by test.StatusChangeDate desc) as rnbr
from #test test
join (
select dateadd(dd, N,
(select min(StatusChangeDate) from #test) --First possible date
) as StatusChangeDate
from tally) dates
on test.StatusChangeDate <= dates.StatusChangeDate
and dates.StatusChangeDate <= getdate()
) result
where rnbr = 1