Given the following table containing the example rows, I'm looking for a query to give me the aggregate result of changes made to the same record. All changes are made against a base record in another table (the results table), so the contents of the results table are not cumulative.
Base Records (from which all changes are made)
Edited Columns highlighted
I’m looking for a query that would give me the cumulative changes (in order by date). This would be the resulting rows:
Any help appreciated!
UPDATE---------------
Let me offer some clarification. The records being edited exist in one table; let's call it [dbo].[Base]. When a person updates a record from [dbo].[Base], their updates go into [dbo].[Updates]. Therefore, a person is always editing from the base table.
At some point, let's say once a day, we need to calculate the sum of changes with the following rule:
For any given record, take the latest change made to each column. If no change was made to a column, take the value from [dbo].[Base]. So one way of looking at the [dbo].[Updates] table is to see only the changed columns.
Please let's not discuss the merits of this approach, I realize it's strange. I just need to figure out how to determine the final state of each record.
Thanks!
This is dirty, but you can give this a shot (test here: https://rextester.com/MKSBU15593)
I use a CTE to do an initial CROSS JOIN of the Base and Updates tables, and then a second CTE to filter it to only the rows where the IDs match. From there I use FIRST_VALUE() for each column, partitioned by the ID value and ordered by a CASE expression (1 if the Base column value matches the Updates column value, else 0) and then by the DATEMODIFIED column descending, to get the most recent changed value for each column.
It spits out
CREATE TABLE Base
(
ID INT
,FNAME VARCHAR(100)
,LNAME VARCHAR(100)
,ADDRESS VARCHAR(100)
,RATING INT
,[TYPE] VARCHAR(5)
,SUBTYPE VARCHAR(5)
);
INSERT INTO dbo.Base
VALUES
( 100,'John','Doe','123 First',3,'Emp','W2'),
( 200,'Jane','Smith','Wacker Dr.',2,'Emp','W2');
CREATE TABLE Updates
(
ID INT
,DATEMODIFIED DATE
,FNAME VARCHAR(100)
,LNAME VARCHAR(100)
,ADDRESS VARCHAR(100)
,RATING INT
,[TYPE] VARCHAR(5)
,SUBTYPE VARCHAR(5)
);
INSERT INTO dbo.Updates
VALUES
( 100,'1/15/2019','John','Doe','123 First St.',3,'Emp','W2'),
( 200,'1/15/2019','Jane','Smyth','Wacker Dr.',2,'Emp','W2'),
( 100,'1/17/2019','Johnny','Doe','123 First',3,'Emp','W2'),
( 200,'1/19/2019','Jane','Smith','2 Wacker Dr.',2,'Emp','W2'),
( 100,'1/20/2019','Jon','Doe','123 First',3,'Cont','W2');
WITH merged AS
(
SELECT b.ID AS IDOrigin
,'1/1/1900' AS DATEMODIFIEDOrigin
,b.FNAME AS FNAMEOrigin
,b.LNAME AS LNAMEOrigin
,b.ADDRESS AS ADDRESSOrigin
,b.RATING AS RATINGOrigin
,b.[TYPE] AS TYPEOrigin
,b.SUBTYPE AS SUBTYPEOrigin
,u.*
FROM base b
CROSS JOIN
dbo.Updates u
), filtered AS
(
SELECT *
FROM merged
WHERE IDOrigin = ID
)
SELECT distinct
ID
,FNAME = FIRST_VALUE(FNAME) OVER (PARTITION BY ID ORDER BY CASE WHEN FNAME = FNAMEOrigin THEN 1 ELSE 0 end, datemodified desc)
,LNAME = FIRST_VALUE(LNAME) OVER (PARTITION BY ID ORDER BY CASE WHEN LNAME = LNAMEOrigin THEN 1 ELSE 0 end, datemodified desc)
,ADDRESS = FIRST_VALUE(ADDRESS) OVER (PARTITION BY ID ORDER BY CASE WHEN ADDRESS = ADDRESSOrigin THEN 1 ELSE 0 end, datemodified desc)
,RATING = FIRST_VALUE(RATING) OVER (PARTITION BY ID ORDER BY CASE WHEN RATING = RATINGOrigin THEN 1 ELSE 0 end, datemodified desc)
,[TYPE] = FIRST_VALUE([TYPE]) OVER (PARTITION BY ID ORDER BY CASE WHEN [TYPE] = TYPEOrigin THEN 1 ELSE 0 end, datemodified desc)
,SUBTYPE = FIRST_VALUE(SUBTYPE) OVER (PARTITION BY ID ORDER BY CASE WHEN SUBTYPE = SUBTYPEOrigin THEN 1 ELSE 0 end, datemodified desc)
FROM filtered
Don't you just want the last record?
select e.*
from edited e
where e.datemodified = (select max(e2.datemodified)
from edited e2
where e2.id = e.id
);
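For reference, here is a minimal sketch of the per-column rule from the update above, written with correlated subqueries instead of window functions. It assumes the dbo.Base / dbo.Updates schema shown earlier; only FNAME and ADDRESS are spelled out (the other columns follow the same pattern), and NULLs are ignored for simplicity:
SELECT b.ID
      ,FNAME = COALESCE(
           (SELECT TOP (1) u.FNAME
            FROM dbo.Updates u
            WHERE u.ID = b.ID AND u.FNAME <> b.FNAME    -- latest value that differs from Base
            ORDER BY u.DATEMODIFIED DESC)
          ,b.FNAME)                                     -- no change recorded: fall back to Base
      ,ADDRESS = COALESCE(
           (SELECT TOP (1) u.ADDRESS
            FROM dbo.Updates u
            WHERE u.ID = b.ID AND u.ADDRESS <> b.ADDRESS
            ORDER BY u.DATEMODIFIED DESC)
          ,b.ADDRESS)
FROM dbo.Base b;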
Related
I have a table named "ROSTER" and in this table I have 22 columns.
I want to query and compare any 2 rows of that table to check whether each column's values in those 2 rows are exactly the same. The ID column always has different values in each row, so I will not include it in the comparison; I will just use it to refer to which rows are being compared.
If all column values are the same: either just display nothing (I prefer this one) or just return the 2 rows as they are.
If some column values are not the same: either display those column names only, or display both the column names and their values (I prefer this one).
Example:
ROSTER Table:
ID   NAME   TIME
1    N1     0900
2    N1     0801
Output:
ID   TIME
1    0900
2    0801
OR
Display "TIME"
Note: Actually, I'm okay with whatever result or output format, as long as I can tell in some way that the 2 rows are not the same.
What are the possible ways to do this in SQL Server?
I am using Microsoft SQL Server Management Studio 18, Microsoft SQL Server 2019-15.0.2080.9
Please try the following solution based on the ideas of John Cappelletti. All credit goes to him.
SQL
-- DDL and sample data population, start
DECLARE @roster TABLE (ID INT PRIMARY KEY, NAME VARCHAR(10), TIME CHAR(4));
INSERT INTO @roster (ID, NAME, TIME) VALUES
(1,'N1','0900'),
(2,'N1','0801')
-- DDL and sample data population, end
DECLARE @source INT = 1
, @target INT = 2;
SELECT id AS source_id, @target AS target_id
,[key] AS [column]
,source_Value = MAX( CASE WHEN Src=1 THEN Value END)
,target_Value = MAX( CASE WHEN Src=2 THEN Value END)
FROM (
SELECT Src=1
,id
,B.*
FROM @roster AS A
CROSS APPLY ( SELECT [Key]
,Value
FROM OpenJson( (SELECT A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES))
) AS B
WHERE id=@source
UNION ALL
SELECT Src=2
,id = @source
,B.*
FROM @roster AS A
CROSS APPLY ( SELECT [Key]
,Value
FROM OpenJson( (SELECT A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES))
) AS B
WHERE id=@target
) AS A
GROUP BY id, [key]
HAVING MAX(CASE WHEN Src=1 THEN Value END)
<> MAX(CASE WHEN Src=2 THEN Value END)
AND [key] <> 'ID' -- exclude this PK column
ORDER BY id, [key];
Output
+-----------+-----------+--------+--------------+--------------+
| source_id | target_id | column | source_Value | target_Value |
+-----------+-----------+--------+--------------+--------------+
| 1 | 2 | TIME | 0900 | 0801 |
+-----------+-----------+--------+--------------+--------------+
A general approach here might be to just aggregate over the entire table and report the state of the counts:
SELECT
CASE WHEN COUNT(DISTINCT ID) = 1 THEN 'Yes' ELSE 'No' END AS [ID same],     -- 'same' here means only one distinct value across the compared rows
CASE WHEN COUNT(DISTINCT NAME) = 1 THEN 'Yes' ELSE 'No' END AS [NAME same],
CASE WHEN COUNT(DISTINCT TIME) = 1 THEN 'Yes' ELSE 'No' END AS [TIME same]
FROM yourTable;
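If you want that check across all 22 columns without typing each one out, here is a hedged sketch that builds the same expression dynamically. It assumes the real table is dbo.ROSTER, that ID 1 and 2 are the rows being compared, and it uses STRING_AGG (available since SQL Server 2017, so fine on your 2019 instance):
DECLARE @sql nvarchar(max);
SELECT @sql = N'SELECT '
    + STRING_AGG(CAST(N'CASE WHEN COUNT(DISTINCT ' + QUOTENAME(c.name)
        + N') = 1 THEN ''Yes'' ELSE ''No'' END AS '
        + QUOTENAME(c.name + N' same') AS nvarchar(max)), N', ')
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.ROSTER')
  AND c.name <> N'ID';                                  -- skip the key column
SET @sql += N' FROM dbo.ROSTER WHERE ID IN (1, 2);';    -- the two rows to compare
EXEC sys.sp_executesql @sql;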
What I am trying to achieve is to group the rows by Id and create a column for the date as well as the data.
For background: these are lab results taken by participants, and some tests cannot be taken on the same day due to fasting restrictions, etc. The database I am using is SQL Server.
Below are my DataSet as well as the desired output.
Sample dataset:
create table Sample
(
Id int,
LAB_DATE date,
A_CRE_1 varchar(100),
B_GLUH_1 varchar(100),
C_LDL_1 varchar(100),
D_TG_1 varchar(100),
E_CHOL_1 varchar(100),
F_HDL_1 varchar(100),
G_CRPH_1 varchar(100),
H_HBA1C_1 varchar(100),
I_GLU120_1 varchar(100),
J_GLUF_1 varchar(100),
K_HCR_1 varchar(100)
)
insert into Sample(Id, LAB_DATE,A_CRE_1, B_GLUH_1,C_LDL_1,E_CHOL_1,F_HDL_1,H_HBA1C_1,K_HCR_1)
values (01, '2017-11-21', '74', '6.4', '2.04', '4.17', '1.64', '6.1', '2.54')
insert into sample (Id, LAB_DATE, I_GLU120_1)
values (01, '2017-11-22','8.8')
insert into sample (Id, LAB_DATE, D_TG_1)
values (01, '2017-11-23','0.56')
insert into sample (Id,LAB_DATE,A_CRE_1,B_GLUH_1,C_LDL_1,D_TG_1,E_CHOL_1,F_HDL_1,K_HCR_1)
values (2,'2018-10-02','57','8.91','2.43','1.28','3.99','1.25','3.19')
insert into sample (Id,LAB_DATE,H_HBA1C_1)
values (2,'2018-10-03','8.6')
insert into sample (Id,LAB_DATE,J_GLUF_1)
values (2,'2018-10-04','7.8')
insert into sample (Id,LAB_DATE,A_CRE_1,B_GLUH_1,C_LDL_1,D_TG_1,E_CHOL_1,F_HDL_1,G_CRPH_1,H_HBA1C_1,K_HCR_1)
values (3,'2016-10-01','100','6.13','3.28','0.94','5.07','1.19','0.27','5.8','4.26')
Desired output:
ID|LAB_DATE|A_CRE_1|B_GLUH_1|C_LDL_1|Date_TG_1|D_TG_1|E_CHOL_1|F_HDL_1|G_CRPH_1|Date_HBA1C_1|H_HBA1C_1|Date_GLU120_1|I_GLU120_1|Date_GLUF_1|J_GLUF_1|K_HCR_1
1|2017-11-21|74|6.4|2.04|2017-11-23|0.56|4.17|1.64|||6.1|2017-11-22|8.8|||2.54
2|2018-10-02|57|8.91|2.43||1.28|3.99|1.25||2018-10-03|8.6|||2018-10-04|7.8|3.19
3|2016-10-01|100|6.13|3.28||0.94|5.07|1.19|0.27||5.8|||||4.26
Here's a solution (that cannot cope with multiple rows of the same id/sample type - you haven't said what to do with those)
select * from
(select Id, LAB_DATE,A_CRE_1, B_GLUH_1,C_LDL_1,E_CHOL_1,F_HDL_1,H_HBA1C_1,K_HCR_1 from sample) s1
INNER JOIN
(select Id, LAB_DATE as glu120date, I_GLU120_1 from sample) s2
ON s1.id = s2.id
INNER JOIN
(select Id, LAB_DATE as dtgdate, D_TG_1 from sample) s3
ON s1.id = s3.id
Hopefully you get the idea with this pattern; if you have other sample types with their own dates, break them out of s1 and into their own subquery in a similar way (e.g. make an s4 for E_CHOL_1, an s5 for K_HCR_1, etc.). Note that if any sample type is missing, it will cause the whole row to disappear from the results. If this is not desired and you accept NULL for missing samples, use LEFT JOIN instead of INNER JOIN.
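Here is a hedged sketch of that extension against the Sample table above. The IS NOT NULL filters are my addition so that rows which don't carry a given result don't multiply the join, and the row carrying A_CRE_1 is treated as the anchor row:
select s1.Id, s1.LAB_DATE, s1.A_CRE_1, s1.B_GLUH_1, s1.C_LDL_1,
       s3.dtgdate, s3.D_TG_1,
       s1.E_CHOL_1, s1.F_HDL_1, s1.H_HBA1C_1,
       s2.glu120date, s2.I_GLU120_1, s1.K_HCR_1
from
(select Id, LAB_DATE, A_CRE_1, B_GLUH_1, C_LDL_1, E_CHOL_1, F_HDL_1, H_HBA1C_1, K_HCR_1
 from sample where A_CRE_1 is not null) s1                      -- anchor row per Id
LEFT JOIN
(select Id, LAB_DATE as glu120date, I_GLU120_1
 from sample where I_GLU120_1 is not null) s2 ON s1.Id = s2.Id
LEFT JOIN
(select Id, LAB_DATE as dtgdate, D_TG_1
 from sample where D_TG_1 is not null) s3 ON s1.Id = s3.Id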
If there will be multiple samples for patient 01 and you only want the latest, the pattern becomes:
select * from
(select Id, LAB_DATE,A_CRE_1, B_GLUH_1,C_LDL_1,E_CHOL_1,F_HDL_1,H_HBA1C_1,K_HCR_1,
row_number() over(partition by id order by lab_date desc) rn
from sample) s1
INNER JOIN
(select Id, LAB_DATE as glu120date, I_GLU120_1,
row_number() over(partition by id order by lab_date desc) rn
from sample) s2
ON s1.id = s2.id and s1.rn = s2.rn
WHERE
s1.rn = 1
Note the addition of row_number() over(partition by id order by lab_date desc) rn - this establishes an incrementing counter in descending date order (latest record = 1, older = 2 ...) that restarts from 1 for every different id. We join on it too, then say where rn = 1 to pick only the latest record for each sample type.
As @Ben suggested, you can group by Id and take MIN for every column, like below.
DECLARE @Sample as table (
Id int,
LAB_DATE date,
A_CRE_1 varchar(100),
B_GLUH_1 varchar(100),
C_LDL_1 varchar(100),
D_TG_1 varchar(100),
E_CHOL_1 varchar(100),
F_HDL_1 varchar(100),
G_CRPH_1 varchar(100),
H_HBA1C_1 varchar(100),
I_GLU120_1 varchar(100),
J_GLUF_1 varchar(100),
K_HCR_1 varchar(100))
insert into @Sample(Id, LAB_DATE,A_CRE_1,
B_GLUH_1,C_LDL_1,E_CHOL_1,F_HDL_1,H_HBA1C_1,K_HCR_1)
values (01,'2017-11-21','74','6.4','2.04','4.17','1.64','6.1','2.54')
insert into @Sample (Id, LAB_DATE, I_GLU120_1)
values (01, '2017-11-22','8.8')
insert into @Sample (Id, LAB_DATE, D_TG_1)
values (01, '2017-11-23','0.56')
SELECT s.Id
, MIN(s.LAB_DATE) AS LAB_DATE
, MIN(s.A_CRE_1) AS A_CRE_1
, MIN(s.B_GLUH_1) AS B_GLUH_1
, MIN(s.C_LDL_1) AS C_LDL_1
, MIN(s.D_TG_1) AS D_TG_1
, MIN(s.E_CHOL_1) AS E_CHOL_1
, MIN(s.F_HDL_1) AS F_HDL_1
, MIN(s.G_CRPH_1) AS G_CRPH_1
, MIN(s.H_HBA1C_1) AS H_HBA1C_1
, MIN(s.I_GLU120_1) AS I_GLU120_1
, MIN(s.J_GLUF_1) AS J_GLUF_1
, MIN(s.K_HCR_1) AS K_HCR_1
FROM @Sample AS s
GROUP BY s.Id
You can also check the SQL Server STUFF function; the tip at the link below shows how to roll up multiple rows into a single row and column:
https://www.mssqltips.com/sqlservertip/2914/rolling-up-multiple-rows-into-a-single-row-and-column-for-sql-server-data/
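For example, a minimal sketch of that roll-up technique against the Sample table above, collapsing each Id's visit dates into one comma-separated column (the visit_dates name is made up here):
SELECT s.Id,
       STUFF((SELECT ', ' + CONVERT(varchar(10), s2.LAB_DATE, 120)
              FROM Sample AS s2
              WHERE s2.Id = s.Id
              ORDER BY s2.LAB_DATE
              FOR XML PATH('')), 1, 2, '') AS visit_dates       -- strip the leading ', '
FROM Sample AS s
GROUP BY s.Id;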
Following on from my comments about presenting the original data, here's what I think you should do (starting from the query you posted in the comments):
SELECT
ID,
MAX(CASE WHEN TestID='1' THEN Results END) [Test_1],
MAX(CASE WHEN TestID='2' THEN Results END) [Test_2],
MAX(CASE WHEN TestID='1' THEN Result_Date_Time END) Test12Date,
MAX(CASE WHEN TestID='3' THEN Results END) [Test_3],
MAX(CASE WHEN TestID='3' THEN Result_Date_Time END) Test3Date
FROM [tbBloodSample]
GROUP BY ID
ORDER BY ID
Notes: if TestID is an int, don't use strings like '1' in your query; use ints. You don't need an ELSE NULL in a CASE; NULL is the default when no WHEN branch matches.
Here is a query pattern. Tests 1 and 2 are always done on the same day, which is why I only pivot their date once. Test 3 might be done later or on the same day, which means the dates in Test12Date and Test3Date might be the same or might differ.
Convert the strings to dates after you do the pivot, to reduce the number of conversions.
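As a rough sketch of those notes applied together -- TestID compared as an int, no ELSE branches, and the date strings converted only once, after the pivot (TRY_CONVERT and the assumption that Result_Date_Time holds convertible strings are mine):
SELECT
    ID,
    MAX(CASE WHEN TestID = 1 THEN Results END) AS [Test_1],
    MAX(CASE WHEN TestID = 2 THEN Results END) AS [Test_2],
    TRY_CONVERT(date, MAX(CASE WHEN TestID = 1 THEN Result_Date_Time END)) AS Test12Date,
    MAX(CASE WHEN TestID = 3 THEN Results END) AS [Test_3],
    TRY_CONVERT(date, MAX(CASE WHEN TestID = 3 THEN Result_Date_Time END)) AS Test3Date
FROM [tbBloodSample]
GROUP BY ID
ORDER BY ID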
I need to return a temp table in SQL, joining another temp table, using DISTINCT and an ORDER BY clause.
I have declared a table variable which holds a few things.
Declare @GrpItems TABLE (ID INT,
Name NVARCHAR(32),
Date DATETIME,
City NVARCHAR(32),
CityCode NVARCHAR(8),
CurrencySort NVARCHAR(16)
)
INSERT INTO @GrpItems
SELECT
ID, Name, Date ,
CityCodeorCaption --this can be two type based on User input CityCode or CityCaption
FROM
RepeatItemTable
Now I have a different table where I want to insert and the procedure returns that table as the final result.
DECLARE @CurrencyTable TABLE (RowNumber INT Identity (1,1),
FK_Currency INT,
Value INT,
CityCode NVARCHAR(16),
CityCaption NVARCHAR(16)
)
INSERT INTO @CurrencyTable
SELECT DISTINCT
gb.FK_Currency, cv.Value,
c.CityCode, c.CityCaption
FROM
Balance b
JOIN
Currency c ON c.PK_Currency = b.FK_Currency
JOIN
@GrpItems gi ON c.FK_Grpitem = gi.PK_Grpitem
ORDER BY
gi.CityCodeorName
I know I need a GROUP BY somewhere, but I am not sure where, or whether I need a subquery in the WHERE filter.
I think
ORDER BY
gi.CityCodeOrNAME
WHEN 'City' THEN City
ELSE CityCode ASC
END
This does not seem to work. I need the DISTINCT because removing it might break some other logic.
Select * from @CurrencyTable
You can always use group by instead of select distinct. That will solve your problem:
SELECT gb.FK_Currency, cv.Value, c.CityCode, c.CityCaption
FROM Balance b JOIN
Currency c
ON c.PK_Currency = b.FK_Currency JOIN
@GrpItems gi
ON c.FK_Grpitem = gi.PK_Grpitem
GROUP BY gb.FK_Currency, cv.Value, c.CityCode, c.CityCaption
ORDER BY MAX(gi.CityCodeorName) ;
Note the use of the aggregation function in the ORDER BY.
ORDER BY CASE WHEN CityCodeOrNAME = 'City'
THEN City
ELSE CityCode
END
In case you need different orders, you can also separate them:
ORDER BY CASE WHEN CityCodeOrNAME = 'City' THEN City END DESC,
CASE WHEN CityCodeOrNAME <> 'City' THEN CityCode END ASC
id | name
-------+-------------------------------------
209096 | Pharmacy
204200 | Eyecare Center
185718 | Duffy PC
214519 | Shopko
162225 | Edward Jones
7609 | Back In Action Chiropractic Center
I use select id, name from customer order by random()
There are 6 records. I just want that whenever I query, I get a different row each time for six queries, and then it starts again from the first; or that the records are reordered each time so that the top one doesn't repeat.
This will give you 6 random rows each time. The Group By is to ensure unique rows if your id is not a unique primary key, so maybe not needed - depending on your table structure.
SELECT TOP 6 id, name, ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) AS [RandomNumber]
FROM customer
GROUP BY id,name
ORDER BY [RandomNumber]
Edit: Sorry! Didn't read the question properly. Although you can use this to get a random row each time :)
If you specifically want all 6 rows in a random order, one at a time, you will need to store the order somewhere. I suggest creating a temp table and selecting from there (a sketch follows below), or, if you are using a front-end web page, fetching all 6 rows and storing them in a dataset.
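A rough sketch of the temp-table idea (the #shuffle table, its pos column, and @n are all made up for illustration): shuffle the six rows once, then hand them out by position.
IF OBJECT_ID('tempdb..#shuffle') IS NOT NULL DROP TABLE #shuffle;
SELECT id, name,
       ROW_NUMBER() OVER (ORDER BY NEWID()) AS pos    -- random, but fixed once stored
INTO #shuffle
FROM customer;
-- Later calls fetch one row at a time by position (1 through 6)
DECLARE @n int = 1;
SELECT id, name FROM #shuffle WHERE pos = @n;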
You can use this logic:
the "millisecond" part of the current date is always changing, and we have an id column that is numeric, so we can use the modulo operator to get a randomized order:
create table #data(id numeric(10), name varchar(20))
insert #data select 209096 , 'Pharmacy'
insert #data select 204200 , 'Eyecare Center'
insert #data select 185718 , 'Duffy PC'
insert #data select 214519 , 'Shopko'
insert #data select 162225 , 'Edward Jones'
insert #data select 7609 , 'Back In Action Chiropractic Center'
select * from #data order by id % (datepart(ms, getdate()) + 1) -- +1 avoids a modulo-by-zero error when the millisecond part is 0
OK, maybe there is another way to do it just in SQL: add a new BIT column, "selected". It's definitely not the fastest or best-performing way to do it.
DECLARE @id INT
IF NOT EXISTS (SELECT TOP 1 id FROM customer WHERE selected = 0)
BEGIN
UPDATE customer SET selected = 0
END
SELECT @id = id FROM
(SELECT TOP 1 id, ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) AS [RandomNumber]
FROM customer WHERE selected = 0
ORDER BY [RandomNumber]) a
UPDATE customer SET selected = 1 WHERE id = @id
SELECT id, name FROM customer WHERE id = @id
Ok so I think I must be misunderstanding something about SQL queries. This is a pretty wordy question, so thanks for taking the time to read it (my problem is right at the end, everything else is just context).
I am writing an accounting system that works on the double-entry principle -- money always moves between accounts; a transaction is 2 or more TransactionParts rows decrementing one account and incrementing another.
Some TransactionParts rows may be flagged as tax related so that the system can produce a report of total VAT sales/purchases etc, so it is possible that a single Transaction may have two TransactionParts referencing the same Account -- one VAT related, and the other not. To simplify presentation to the user, I have a view to combine multiple rows for the same account and transaction:
create view Accounting.CondensedEntryView as
select p.[Transaction], p.Account, sum(p.Amount) as Amount
from Accounting.TransactionParts p
group by p.[Transaction], p.Account
I then have a view to calculate the running balance column, as follows:
create view Accounting.TransactionBalanceView as
with cte as
(
select ROW_NUMBER() over (order by t.[Date]) AS RowNumber,
t.ID as [Transaction], p.Amount, p.Account
from Accounting.Transactions t
inner join Accounting.CondensedEntryView p on p.[Transaction]=t.ID
)
select b.RowNumber, b.[Transaction], a.Account,
coalesce(sum(a.Amount), 0) as Balance
from cte a, cte b
where a.RowNumber <= b.RowNumber AND a.Account=b.Account
group by b.RowNumber, b.[Transaction], a.Account
For reasons I haven't yet worked out, a certain transaction (ID=30) doesn't appear on an account statement for the user. I confirmed this by running
select * from Accounting.TransactionBalanceView where [Transaction]=30
This gave me the following result:
RowNumber Transaction Account Balance
-------------------- ----------- ------- ---------------------
72 30 23 143.80
As I said before, there should be at least two TransactionParts for each Transaction, so one of them isn't being presented in my view. I assumed there must be an issue with the way I've written my view, and ran a query to see if there's anything else missing:
select [Transaction], count(*)
from Accounting.TransactionBalanceView
group by [Transaction]
having count(*) < 2
This query returns no results -- not even for Transaction 30! Thinking I must be an idiot, I ran the following query:
select [Transaction]
from Accounting.TransactionBalanceView
where [Transaction]=30
It returns two rows! So select * returns only one row and select [Transaction] returns both. After much head-scratching and re-running the last two queries, I concluded I don't have the faintest idea what's happening. Any ideas?
Thanks a lot if you've stuck with me this far!
Edit:
Here are the execution plans for select * and for select [Transaction] (1000 lines each, hence finding somewhere else to host them).
Edit 2:
For completeness, here are the tables I used:
create table Accounting.Accounts
(
ID smallint identity primary key,
[Name] varchar(50) not null
constraint UQ_AccountName unique,
[Type] tinyint not null
constraint FK_AccountType foreign key references Accounting.AccountTypes
);
create table Accounting.Transactions
(
ID int identity primary key,
[Date] date not null default getdate(),
[Description] varchar(50) not null,
Reference varchar(20) not null default '',
Memo varchar(1000) not null
);
create table Accounting.TransactionParts
(
ID int identity primary key,
[Transaction] int not null
constraint FK_TransactionPart foreign key references Accounting.Transactions,
Account smallint not null
constraint FK_TransactionAccount foreign key references Accounting.Accounts,
Amount money not null,
VatRelated bit not null default 0
);
Demonstration of a possible explanation.
Create table Script
SELECT *
INTO #T
FROM master.dbo.spt_values
CREATE NONCLUSTERED INDEX [IX_T] ON #T ([name] DESC,[number] DESC);
Query one (Returns 35 results)
WITH cte AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY NAME) AS rn
FROM #T
)
SELECT c1.number,c1.[type]
FROM cte c1
JOIN cte c2 ON c1.rn=c2.rn AND c1.number <> c2.number
Query Two (Same as before but adding c2.[type] to the select list makes it return 0 results)
;
WITH cte AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY NAME) AS rn
FROM #T
)
SELECT c1.number,c1.[type] ,c2.[type]
FROM cte c1
JOIN cte c2 ON c1.rn=c2.rn AND c1.number <> c2.number
Why?
The row_number() ordering for duplicate NAMEs isn't fully specified, so SQL Server just chooses whichever order fits in with the best execution plan for the required output columns. In the second query this is the same for both CTE invocations; in the first one it chooses a different access path, with resulting different row numbering.
Suggested Solution
You are self joining the CTE on ROW_NUMBER() over (order by t.[Date])
Contrary to what may have been expected, the CTE will likely not be materialised (which would have ensured consistency for the self join), so you are assuming a correlation between the ROW_NUMBER() values on both sides that may well not exist for records where a duplicate [Date] exists in the data.
What if you try ROW_NUMBER() over (order by t.[Date], t.ID), to ensure that in the event of tied dates the row numbering is in a guaranteed, consistent order? (Or some other column, or combination of columns, that can differentiate records if ID won't do it.)
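As a sketch, that suggestion applied verbatim to the view from the question looks like this (only the ROW_NUMBER ordering changes; ALTER VIEW is just one way to apply it):
alter view Accounting.TransactionBalanceView as
with cte as
(
     select ROW_NUMBER() over (order by t.[Date], t.ID) AS RowNumber,   -- t.ID breaks ties on equal dates
            t.ID as [Transaction], p.Amount, p.Account
     from Accounting.Transactions t
     inner join Accounting.CondensedEntryView p on p.[Transaction]=t.ID
)
select b.RowNumber, b.[Transaction], a.Account,
       coalesce(sum(a.Amount), 0) as Balance
from cte a, cte b
where a.RowNumber <= b.RowNumber AND a.Account=b.Account
group by b.RowNumber, b.[Transaction], a.Account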
If the purpose of this part of the view is just to make sure that the same row isn't joined to itself
where a.RowNumber <= b.RowNumber
then how does changing this part to
where a.RowNumber <> b.RowNumber
affect the results?
It seems you are reading dirty entries (someone else is deleting/inserting data).
Try SET TRANSACTION ISOLATION LEVEL READ COMMITTED.
I've tried this code (it seems equivalent to yours):
IF object_id('tempdb..#t') IS NOT NULL DROP TABLE #t
CREATE TABLE #t(i INT, val INT, acc int)
INSERT #t
SELECT 1, 2, 70
UNION ALL SELECT 2, 3, 70
;with cte as
(
select ROW_NUMBER() over (order by t.i) AS RowNumber,
t.val as [Transaction], t.acc Account
from #t t
)
select b.RowNumber, b.[Transaction], a.Account
from cte a, cte b
where a.RowNumber <= b.RowNumber AND a.Account=b.Account
group by b.RowNumber, b.[Transaction], a.Account
and got two rows
RowNumber Transaction Account
1 2 70
2 3 70