I am building a leaderboard for some of my online games. Here is what I need to do with the data:
Get the rank of a player for a given game across multiple time frames (today, last week, all time, etc.)
Get paginated rankings (e.g. top scores for the last 24 hrs., players between rank 25 and 50, the rank of a single user)
I came up with the following table definition and index, and I have a couple of questions.
Considering my scenarios, do I have a good primary key? The reason I have a clustered key across gameId, playerName and score is simply that I want to make sure all data for a given game is in the same area and that score is already sorted. Most of the time I will display the data in descending order of score (+ updatedDateTime for ties) for a given gameId. Is this the right strategy? In other words, I want to make sure I can run the queries that compute my players' ranks as fast as possible.
CREATE TABLE score (
    [gameId] [smallint] NOT NULL,
    [playerName] [nvarchar](50) NOT NULL,
    [score] [int] NOT NULL,
    [createdDateTime] [datetime2](3) NOT NULL,
    [updatedDateTime] [datetime2](3) NOT NULL,
    PRIMARY KEY CLUSTERED ([gameId] ASC, [playerName] ASC, [score] DESC, [updatedDateTime] ASC)
)
CREATE NONCLUSTERED INDEX [Score_Idx] ON score ([gameId] ASC, [score] DESC, [updatedDateTime] ASC) INCLUDE ([playerName])
Below is the first iteration of the query I will be using to get the rank of my players. However, I am a bit disappointed by the execution plan (see below). Why does SQL Server need to sort? The additional sort seems to come from the RANK function, but isn't my data already sorted in descending order (based on the clustered key of the score table)? I am also wondering whether I should normalize my table a bit more and move the PlayerName column out into a Player table. I originally decided to keep everything in the same table to minimize the number of joins.
DECLARE @GameId AS INT = 0
DECLARE @From AS DATETIME2(3) = '2013-10-01'

SELECT DENSE_RANK() OVER (ORDER BY s.Score DESC) AS [Rank], s.PlayerName, s.Score, s.UpdatedDateTime
FROM [mrgleaderboard].[score] s
WHERE s.GameId = @GameId
  AND (s.UpdatedDateTime >= @From OR @From IS NULL)
Thank you for the help!
[Updated]
The primary key is not good.
Your unique entity is [GameID] + [PlayerName], and the composite clustered index is over 120 bytes wide because of the nvarchar column. See the answer by @marc_s in the related topic "SQL Server - Clustered index design for dictionary".
Your table schema does not match your time-period requirements.
For example: I score 300 on Wednesday and that score is stored on the leaderboard. The next day I score 250, but that result is never recorded because it is lower, so a query for Thursday's leaderboard returns nothing for me.
You can get complete information from a historical table of played-game scores, but it can be very expensive:
CREATE TABLE GameLog (
[id] int NOT NULL IDENTITY
CONSTRAINT [PK_GameLog] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[score] int NOT NULL,
[createdDateTime] datetime2(3) NOT NULL)
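For illustration, here is what a daily leaderboard computed straight from GameLog might look like. This is only a sketch of mine, and it shows the cost: the history has to be re-aggregated and re-sorted on every call.

DECLARE @GameId smallint = 1
       ,@From datetime2(3) = '2013-11-20'
       ,@To   datetime2(3) = '2013-11-21'

-- Best score per player for one day, ranked; scans and aggregates the whole day's history.
SELECT DENSE_RANK() OVER (ORDER BY MAX(score) DESC) AS Rnk
      ,playerId
      ,MAX(score) AS BestScore
FROM GameLog
WHERE gameId = @GameId
  AND createdDateTime >= @From AND createdDateTime < @To
GROUP BY playerId
ORDER BY Rnk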
Here are some solutions to accelerate this, all related to the aggregation:
Indexed view on the historical table (see the post by @Twinkles).
You need three indexed views for the three time periods. Drawbacks: the potentially huge historical table plus three indexed views, no way to drop the "old" periods from the table, and a performance hit when saving scores.
Asynchronous leaderboard
Scores are saved to the historical table. A SQL job/"worker" (or several), on a schedule (once per minute?), sorts the historical table and populates the leaderboard table (three tables for three time periods, or one table with a time-period key) with each user's precalculated rank. That table can also be denormalized (carry score, datetime, PlayerName, and so on). Pros: fast reads (no sorting), fast score saves, any time periods, flexible logic and flexible schedules. Cons: a user who has just finished a game may not immediately find themselves on the leaderboard.
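A minimal sketch of the refresh step such a job might run (my addition; LeaderboardDaily is a hypothetical target table with one precalculated row per player and day):

DECLARE @From date = CAST(GETDATE() AS date)  -- the daily window being rebuilt

BEGIN TRANSACTION
-- Throw away and rebuild today's precalculated ranks in one shot.
DELETE FROM LeaderboardDaily WHERE timePeriodFrom = @From

INSERT INTO LeaderboardDaily (gameId, playerId, timePeriodFrom, rnk, score)
SELECT gameId
      ,playerId
      ,@From
      ,DENSE_RANK() OVER (PARTITION BY gameId ORDER BY MAX(score) DESC)
      ,MAX(score)
FROM GameLog
WHERE createdDateTime >= @From
GROUP BY gameId, playerId
COMMIT

Readers then hit LeaderboardDaily directly, with no sort and no window function at query time.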
Preaggregated leaderboard
When recording the results of a game session, do some pre-processing. In your case, something like UPDATE [Leaderboard] SET score = @CurrentScore WHERE @CurrentScore > MAX(score) AND ... for the player/game id, though that only covers the "all time" leaderboard. The schema might look like this:
CREATE TABLE [Leaderboard] (
[id] int NOT NULL IDENTITY
CONSTRAINT [PK_Leaderboard] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[timePeriod] tinyint NOT NULL, -- 0 -all time, 1-monthly, 2 -weekly, 3 -daily
[timePeriodFrom] date NOT NULL, -- '1900-01-01' for all time, '2013-11-01' for monthly, etc.
[score] int NOT NULL,
[createdDateTime] datetime2(3) NOT NULL
)
playerId timePeriod timePeriodFrom Score
----------------------------------------------
1 0 1900-01-01 300
...
1 1 2013-10-01 150
1 1 2013-11-01 300
...
1 2 2013-10-07 150
1 2 2013-11-18 300
...
1 3 2013-11-19 300
1 3 2013-11-20 250
...
So you have to update the score for all three time periods. As you can also see, the leaderboard will contain "old" periods, such as the monthly rows for October; you may have to delete them if you do not need those statistics. Pros: no historical table needed. Cons: a complicated procedure for storing a result, leaderboard maintenance, and the query still requires a sort and a JOIN.
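The "complicated procedure for storing a result" could be one MERGE per finished game session. This is only a sketch of mine against the [Leaderboard] table above; the Monday week start and the period calculations are my assumptions:

DECLARE @GameId smallint = 1, @PlayerId int = 1, @Score int = 300
       ,@Now datetime2(3) = SYSDATETIME()

MERGE [Leaderboard] AS t
USING (VALUES
        (0, CAST('19000101' AS date))                                                  -- all time
       ,(1, DATEADD(day, 1 - DAY(@Now), CAST(@Now AS date)))                           -- month start
       ,(2, DATEADD(day, -(DATEDIFF(day, '19000101', @Now) % 7), CAST(@Now AS date)))  -- week start (assumed Monday)
       ,(3, CAST(@Now AS date))                                                        -- day
      ) AS p (timePeriod, timePeriodFrom)
ON  t.gameId = @GameId AND t.playerId = @PlayerId
AND t.timePeriod = p.timePeriod AND t.timePeriodFrom = p.timePeriodFrom
WHEN MATCHED AND @Score > t.score THEN
    UPDATE SET score = @Score, createdDateTime = @Now
WHEN NOT MATCHED THEN
    INSERT (gameId, playerId, timePeriod, timePeriodFrom, score, createdDateTime)
    VALUES (@GameId, @PlayerId, p.timePeriod, p.timePeriodFrom, @Score, @Now);

A full demo script follows: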
CREATE TABLE [Player] (
[id] int NOT NULL IDENTITY CONSTRAINT [PK_Player] PRIMARY KEY CLUSTERED,
[playerName] nvarchar(50) NOT NULL CONSTRAINT [UQ_Player_playerName] UNIQUE NONCLUSTERED)
CREATE TABLE [Leaderboard] (
[id] int NOT NULL IDENTITY CONSTRAINT [PK_Leaderboard] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[timePeriod] tinyint NOT NULL, -- 0 -all time, 1-monthly, 2 -weekly, 3 -daily
[timePeriodFrom] date NOT NULL, -- '1900-01-01' for all time, '2013-11-01' for monthly, etc.
[score] int NOT NULL,
[createdDateTime] datetime2(3)
)
CREATE UNIQUE NONCLUSTERED INDEX [UQ_Leaderboard_gameId_playerId_timePeriod_timePeriodFrom] ON [Leaderboard] ([gameId] ASC, [playerId] ASC, [timePeriod] ASC, [timePeriodFrom] ASC)
CREATE NONCLUSTERED INDEX [IX_Leaderboard_gameId_timePeriod_timePeriodFrom_Score] ON [Leaderboard] ([gameId] ASC, [timePeriod] ASC, [timePeriodFrom] ASC, [score] ASC)
GO
-- Generate test data
-- Generate 500K unique players
;WITH digits (d) AS (SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION
SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 0)
INSERT INTO Player (playerName)
SELECT TOP (500000) LEFT(CAST(NEWID() as nvarchar(50)), 20 + (ABS(CHECKSUM(NEWID())) & 15)) as Name
FROM digits CROSS JOIN digits ii CROSS JOIN digits iii CROSS JOIN digits iv CROSS JOIN digits v CROSS JOIN digits vi
-- Random score 500K players * 4 games = 2M rows
INSERT INTO [Leaderboard] (
[gameId],[playerId],[timePeriod],[timePeriodFrom],[score],[createdDateTime])
SELECT GameID, Player.id,ABS(CHECKSUM(NEWID())) & 3 as [timePeriod], DATEADD(MILLISECOND, CHECKSUM(NEWID()),GETDATE()) as Updated, ABS(CHECKSUM(NEWID())) & 65535 as score
, DATEADD(MILLISECOND, CHECKSUM(NEWID()),GETDATE()) as Created
FROM ( SELECT 1 as GameID UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4) as Game
CROSS JOIN Player
ORDER BY NEWID()
UPDATE [Leaderboard] SET [timePeriodFrom]='19000101' WHERE [timePeriod] = 0
GO
DECLARE @From date = '19000101' --'20131108'
       ,@GameID int = 3
       ,@timePeriod tinyint = 0
-- Get paginated ranking
;With Lb as (
SELECT
DENSE_RANK() OVER (ORDER BY Score DESC) as Rnk
,Score, createdDateTime, playerId
FROM [Leaderboard]
WHERE GameId = @GameId
AND [timePeriod] = @timePeriod
AND [timePeriodFrom] = @From)
SELECT lb.rnk,lb.Score, lb.createdDateTime, lb.playerId, Player.playerName
FROM Lb INNER JOIN Player ON lb.playerId = Player.id
ORDER BY rnk OFFSET 75 ROWS FETCH NEXT 25 ROWS ONLY;
-- Get rank of a player for a given game
SELECT (SELECT COUNT(DISTINCT rnk.score)
        FROM [Leaderboard] as rnk
        WHERE rnk.GameId = @GameId
          AND rnk.[timePeriod] = @timePeriod
          AND rnk.[timePeriodFrom] = @From
          AND rnk.score >= [Leaderboard].score) as rnk
      ,[Leaderboard].Score, [Leaderboard].createdDateTime, [Leaderboard].playerId, Player.playerName
FROM [Leaderboard] INNER JOIN Player ON [Leaderboard].playerId = Player.id
WHERE [Leaderboard].GameId = @GameId
  AND [Leaderboard].[timePeriod] = @timePeriod
  AND [Leaderboard].[timePeriodFrom] = @From
  AND Player.playerName = N'785DDBBB-3000-4730-B'
GO
This is only an example to present the ideas; it can be optimized further. For example, combining the GameID, TimePeriod and TimePeriodFrom columns into a single column via a dictionary table would make the index more effective.
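A sketch of that dictionary-table idea (my addition; the BoardKey name and layout are made up):

CREATE TABLE [BoardKey] (
    [id] int NOT NULL IDENTITY CONSTRAINT [PK_BoardKey] PRIMARY KEY CLUSTERED,
    [gameId] smallint NOT NULL,
    [timePeriod] tinyint NOT NULL,
    [timePeriodFrom] date NOT NULL,
    CONSTRAINT [UQ_BoardKey_gameId_timePeriod_timePeriodFrom]
        UNIQUE NONCLUSTERED ([gameId], [timePeriod], [timePeriodFrom])
)
-- [Leaderboard] would then store a single boardKeyId int instead of the three
-- columns, and the ranking index shrinks to (boardKeyId ASC, score DESC).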
P.S. Sorry for my English. Feel free to fix grammatical or spelling errors
You could look into indexed views to create scoreboards for common time ranges (today, this week/month/year, all-time).
To get the rank of a player for a given game across multiple timeframes, you will select the game and rank (i.e. sort) by score over multiple timeframes. For this, your nonclustered index could be changed like this, since this is the way your SELECT seems to query:
CREATE NONCLUSTERED INDEX [Score_Idx]
ON score ([gameId] ASC, [updatedDateTime] ASC, [score] DESC)
INCLUDE ([playerName])
For the paginated ranking:
For the 24h top score, I guess you will want all the top scores of a single user across all games within the last 24 hours. For this you will be querying [playerName] and [updatedDateTime] together with [gameId].
For the players between rank 25 and 50, I assume you are talking about a single game with a long ranking that you can page through. The query will then be based on [gameId] and [score], and a little on [updatedDateTime] for the ties.
The single-user ranks, probably per game, are a little more difficult: you will need to query the leaderboards of all games to get the player's rank in them, and then filter on the player. You will need [gameId], [score] and [updatedDateTime], and then filter by player.
Concluding all this, I propose you keep your nonclustered index and change the primary key to:
PRIMARY KEY CLUSTERED ([gameId] ASC, [score] DESC, [updatedDateTime] ASC)
For the 24h top score, I think this might help:
CREATE NONCLUSTERED INDEX [player_Idx]
ON score ([playerName] ASC)
INCLUDE ([gameId], [score])
The DENSE_RANK query sorts because it selects [gameId], [updatedDateTime] and [score]; see my comment on the nonclustered index above.
I would also think twice about including [updatedDateTime] in your queries and subsequently in your indexes. Maybe sometimes two players get the same rank, why not? [updatedDateTime] will make your index swell up significantly.
You might also think about partitioning the tables by [gameId].
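Partitioning by [gameId] could start from something like this minimal sketch (my addition; the boundary values and the single-filegroup mapping are placeholders):

CREATE PARTITION FUNCTION pfGameId (smallint)
    AS RANGE RIGHT FOR VALUES (100, 200, 300)
GO
CREATE PARTITION SCHEME psGameId
    AS PARTITION pfGameId ALL TO ([PRIMARY])
GO
-- The score table and its indexes would then be created ON psGameId(gameId).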
As a bit of a sidetrack:
Ask yourself how accurate and how up to date do the scores in the leaderboard actually need to be?
As a player I don't care if I'm number 142134 in the world or number 142133. I do care if I beat my friends' exact score (but then I only need my score compared to a couple of other scores) and I want to know that my new highscore sends me from somewhere around 142000 to somewhere around 90000. (Yay!)
So if you want really fast leaderboards, you do not actually need all data to be up to date. You could compute a static sorted copy of the leaderboard daily or hourly and, when showing player X's score, show at what rank it would fit in the static copy.
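The lookup against such a static copy is cheap. A sketch (my addition; StaticScore is a hypothetical snapshot table rebuilt hourly):

DECLARE @GameId smallint = 1, @MyScore int = 12345

-- Approximate rank = 1 + number of snapshot scores that beat mine.
SELECT COUNT(*) + 1 AS approxRank
FROM StaticScore
WHERE gameId = @GameId
  AND score > @MyScore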
When comparing to friends, last minute updates do matter, but you're dealing with only a couple hundred scores, so you can look up their actual scores in the up to date leaderboards.
Oh, and I care about the top 10 of course. Consider them my "friends" merely based on the fact that they scored so well, and show these values up to date.
Your clustered index is composite, which means the order is defined by more than one column. You request ORDER BY Score, but Score is not the leading column of the clustered index, so entries are not necessarily in Score order. For example, take the entries

1, 2, some date
2, 1, some other date

If you select just Score, the order will be

2
1

which needs to be sorted.
I would not put the score column into the clustered index, because it will probably change all the time, and updates on a column that is part of the clustered key are expensive.
Related
I get the following error when I want to execute a SQL query:
"Msg 209, Level 16, State 1, Line 9
Ambiguous column name 'i_id'."
This is the SQL query I want to execute:
SELECT DISTINCT x.*
FROM items x LEFT JOIN items y
ON y.i_id = x.i_id
AND x.last_seen < y.last_seen
WHERE x.last_seen > '4-4-2017 10:54:11'
AND x.spot = 'spot773'
AND (x.technology = 'Bluetooth LE' OR x.technology = 'EPC Gen2')
AND y.id IS NULL
GROUP BY i_id
This is what my table looks like:
CREATE TABLE [dbo].[items] (
[id] INT IDENTITY (1, 1) NOT NULL,
[i_id] VARCHAR (100) NOT NULL,
[last_seen] DATETIME2 (0) NOT NULL,
[location] VARCHAR (200) NOT NULL,
[code_hex] VARCHAR (100) NOT NULL,
[technology] VARCHAR (100) NOT NULL,
[url] VARCHAR (100) NOT NULL,
[spot] VARCHAR (200) NOT NULL,
PRIMARY KEY CLUSTERED ([id] ASC));
I've tried a couple of things, but I'm not an SQL expert. :)
Any help would be appreciated.
EDIT:
I do get duplicate rows when I remove the GROUP BY line.
I'm adding another answer in order to show how you'd typically select the latest record per group without getting duplicates. You'd use ROW_NUMBER for this, marking the latest record per i_id with row number 1.
SELECT *
FROM
(
SELECT
i.*,
ROW_NUMBER() over (PARTITION BY i_id ORDER BY last_seen DESC) as rn
FROM items i
WHERE last_seen > '2017-04-04 10:54:11'
AND spot = 'spot773'
AND technology IN ('Bluetooth LE', 'EPC Gen2')
) ranked
WHERE rn = 1;
(You'd use RANK or DENSE_RANK instead of ROW_NUMBER if you wanted duplicates.)
You forgot the table alias in GROUP BY i_id.
Anyway, why are you writing an anti join query where you are trying to get rid of duplicates with both DISTINCT and GROUP BY? Did you have issues with a straight-forward NOT EXISTS query? You are making things way more complicated than they actually are.
SELECT *
FROM items i
WHERE last_seen > '2017-04-04 10:54:11'
AND spot = 'spot773'
AND technology IN ('Bluetooth LE', 'EPC Gen2')
AND NOT EXISTS
(
SELECT *
FROM items other
WHERE i.i_id = other.i_id
AND i.last_seen < other.last_seen
);
(There are other techniques of course to get the last seen record per i_id. This is one; another is to compare with MAX(last_seen); another is to use ROW_NUMBER.)
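For completeness, a sketch of the MAX(last_seen) variant just mentioned (my addition; as in the NOT EXISTS version, the latest record per i_id is determined across all rows and the filters are applied afterwards):

SELECT i.*
FROM items i
JOIN
(
    -- Latest sighting per i_id over the whole table.
    SELECT i_id, MAX(last_seen) AS last_seen
    FROM items
    GROUP BY i_id
) latest ON latest.i_id = i.i_id AND latest.last_seen = i.last_seen
WHERE i.last_seen > '2017-04-04 10:54:11'
  AND i.spot = 'spot773'
  AND i.technology IN ('Bluetooth LE', 'EPC Gen2');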
Ok, so I have a table that's just become a monster. And querying on it has become insanely slow for some of our customers. Here's the table in question:
CREATE TABLE [EventTime](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[EventId] [bigint] NOT NULL,
[Time] [datetime] NOT NULL,
CONSTRAINT [PK_EventTime] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
)
CREATE NONCLUSTERED INDEX [IX_EventTime_Main] ON [EventTime]
(
[Time] ASC,
[EventId] ASC
)
It has a FK to the Events table. An event is an action taken by a certain user, IP, service, and accountId. This EventTime table tells us which events happened at what time. An event can happen today at 3am and also at 12pm last week. The idea is to not duplicate event rows.
Now this EventTime table has become massive for some customers, our biggest being 240 million rows and growing, and querying it has become insanely slow when looking at a time span greater than a few days. Here's the query we're executing today (note: I'm running queries locally from a rip of the DB to minimize network latency or timeouts caused by collectors hitting the DB):
SELECT
a.TrailId, a.[NameId], a.[ResourceId], a.[AccountId], a.[ServiceId]
FROM [EventTime] b WITH (NOLOCK) INNER JOIN [Event] a WITH (NOLOCK) ON a.Id = b.EventId
WHERE
a.TrailId IN (1, 2, 3, 4, 5) AND
a.NameId IN (6) AND
b.[Time] >= '2014-10-29 00:00:00.000' AND
b.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY b.[Time] ASC
Note: TrailId is a column in the Event table that tells us which customer to filter down to in the query. We have the list of TrailIds before we execute this query. Now, this query is very slow, about 45 minutes to execute. Here are some queries I've tried:
SELECT
a.EventId, a.[NameId], a.[ResourceId], a.[AccountId], a.[ServiceId]
FROM [EventTime] b WITH(NOLOCK)
JOIN [Event] a WITH(NOLOCK) on a.Id = b.EventId
WHERE
b.EventId IN (SELECT Id from [Event] where TrailId IN (1, 2, 3, 4, 5) AND NameId IN (6) ) AND
b.[Time] >= '2014-08-01 00:00:00.000' AND
b.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY b.[Time] ASC
The subquery worked well for small queries, but for larger date ranges the performance suffered greatly. Next I tried:
DECLARE @ListofIDs TABLE(Ids bigint)

INSERT INTO @ListofIDs (Ids)
SELECT Id from Event where TrailId IN (140, 629, 630, 631, 632) AND NameId IN (468)
SELECT
a.EventId, a.[NameId], a.[ResourceId], a.[AccountId], a.[ServiceId]
FROM [EventTime] b WITH(NOLOCK)
JOIN [Event] a WITH(NOLOCK) on a.Id = b.EventId
WHERE
b.EventId IN (SELECT Ids FROM @ListofIDs) AND
b.[Time] >= '2014-08-01 00:00:00.000' AND
b.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY b.[Time] ASC
Casting my subquery into a table variable for my main query to reference did help a bit; the query took about 33 minutes. But it's still way, way too slow. =/
Next I tried playing with indexes. I figured I might have been putting too much into one index, so I dropped the existing one and broke it out into two.
CREATE NONCLUSTERED INDEX [IX_EventTime_Main] ON [EventTime]
(
[Time] ASC
)
GO
CREATE NONCLUSTERED INDEX [IX_EventTime_Event] ON [EventTime]
(
[EventId] ASC
)
This didn't seem to do anything. Same query times.
I think the core issue is that this table is just very unorganized. The Time column holds very specific time values and none of them arrive in order. For example, customer 8's collector might be saving EventTimes for 2014-11-12 04:12:01.000 while customer 10 is saving 2015-03-15 13:59:21.000. So the query has to process and sort all these dates before filtering down, and indexing [Time] probably isn't effective at all.
Anyone have any ideas on how I can speed this up?
This is your query:
SELECT e.TrailId, e.[NameId], e.[ResourceId], e.[AccountId], e.[ServiceId]
FROM [EventTime] et WITH (NOLOCK) INNER JOIN
[Event] e WITH (NOLOCK)
ON e.Id = et.EventId
WHERE e.TrailId IN (1, 2, 3, 4, 5) AND
e.NameId = 6 AND
et.[Time] >= '2014-10-29 00:00:00.000' AND
et.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY et.[Time] ASC
The best indexes for this query are probably Event(NameId, TrailId) and EventTime(EventId, Time). This assumes the result set is not humongous (tens of millions of rows), in which case an optimization to get rid of the ORDER BY would be desirable.
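Spelled out as DDL, that suggestion might look like this (my addition; the INCLUDE list, there to cover the select list, is an assumption):

CREATE NONCLUSTERED INDEX IX_Event_NameId_TrailId
    ON [Event] (NameId, TrailId)
    INCLUDE (ResourceId, AccountId, ServiceId);

CREATE NONCLUSTERED INDEX IX_EventTime_EventId_Time
    ON [EventTime] (EventId, [Time]);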
I would ditch the ID column and make the primary key a composite clustered one on EventId and Time:
CREATE TABLE [EventTime](
[EventId] [bigint] NOT NULL,
[Time] [datetime] NOT NULL,
CONSTRAINT [PK_EventTime] PRIMARY KEY CLUSTERED
(
[EventId] ASC
, [Time] ASC
)
)
CREATE NONCLUSTERED INDEX [IX_EventTime_Main] ON [EventTime]
(
[Time] ASC,
[EventId] ASC
);
Check the execution plans to see if the non-clustered index is used, and drop it if it is not needed.
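One way to check whether the non-clustered index earns its keep before dropping it (my addition; a standard query against the usage-stats DMV, whose counters reset on instance restart):

SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
    ON  s.object_id = i.object_id
    AND s.index_id = i.index_id
    AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('EventTime');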
I have the following table in the database, with around 10 million records:
Declaration:
create table PropertyOwners (
    [Key] int not null primary key,
    PropertyKey int not null,
    BoughtDate DateTime,
    OwnerKey int null,
    GroupKey int null,
    IsGroup bit not null -- assumed: referenced by the index and the RANK() queries below, but missing from the original declaration
)
go
go
[Key] is the primary key, and the combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
With the following index:
CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners]
(
[PropertyKey] ASC,
[BoughtDate] DESC,
[IsGroup] ASC
)
INCLUDE ( [OwnerKey], [GroupKey])
go
Description of the case:
For a single BoughtDate, one property can belong to multiple owners or to a single group. For a single record there can be either an OwnerKey or a GroupKey, but not both, so one of them will be null in each record. I am trying to retrieve data from the table using the following query for the OwnerKey. If there are rows for both owners and a group on the same property at the same time, the rows having OwnerKey are preferred; that is why I use IsGroup in the RANK function.
declare @ownerKey int = 40000
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from (
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, IsGroup) as [Rank]
from PropertyOwners
) as result
where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
It takes 2-3 seconds to get the records whenever I use [Rank]=1 with any of PropertyKey/OwnerKey/GroupKey. But when I try to get the records for PropertyKey/OwnerKey/GroupKey without [Rank]=1 in the same query, it executes in milliseconds. See the following query:
declare @ownerKey int = 40000
select PropertyKey, BoughtDate, OwnerKey, GroupKey
from (
select PropertyKey, BoughtDate, OwnerKey, GroupKey,
RANK() over (partition by PropertyKey order by BoughtDate desc, IsGroup) as [Rank]
from PropertyOwners
) as result
where result.[OwnerKey]=@ownerKey
I have also tried using an indexed view to pre-rank the rows, but I can't use it in my query because the RANK function is not supported in indexed views.
Please note this table is updated once a day, and I am using SQL Server 2008 R2. Any help will be greatly appreciated.
If I understood your query correctly, it basically does the following: for a given owner, return all properties for which this owner is the latest one.
This can also be achieved in other ways, without ranking the entire 10M-row table, such as:
select po.*
from dbo.PropertyOwners po
where po.OwnerKey = @OwnerKey
and not exists (
select 0 from dbo.PropertyOwners lo
where lo.PropertyKey = po.PropertyKey
and lo.BoughtDate > po.BoughtDate
-- Other group-related conditions here, if need be
);
Essentially the same, just a little bit different wording:
select po.*
from dbo.PropertyOwners po
left join dbo.PropertyOwners lo on lo.PropertyKey = po.PropertyKey
and lo.BoughtDate > po.BoughtDate
-- Other group-related conditions here, if need be
where po.OwnerKey = @OwnerKey
and lo.PropertyKey is null;
You will definitely need different indexes for these, and I can't be sure they will help. But at least give it a try.
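For instance, the outer seek on OwnerKey might be served by something like this (my addition, guessed from the predicates; the existing IX_PropertyOwners should already cover the correlated lookup on PropertyKey/BoughtDate):

CREATE NONCLUSTERED INDEX IX_PropertyOwners_OwnerKey
    ON dbo.PropertyOwners (OwnerKey)
    INCLUDE (PropertyKey, BoughtDate, GroupKey);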
I've got a table that looks like this (I wasn't sure what all might be relevant, so I had Toad dump the whole structure):
CREATE TABLE [dbo].[TScore] (
[CustomerID] int NOT NULL,
[ApplNo] numeric(18, 0) NOT NULL,
[BScore] int NULL,
[OrigAmt] money NULL,
[MaxAmt] money NULL,
[DateCreated] datetime NULL,
[UserCreated] char(8) NULL,
[DateModified] datetime NULL,
[UserModified] char(8) NULL,
CONSTRAINT [PK_TScore]
PRIMARY KEY CLUSTERED ([CustomerID] ASC, [ApplNo] ASC)
);
When I run the following query (on a database with 3 million records in the TScore table), it takes about a second, even though a plain Select BScore from CustomerDB..TScore WHERE CustomerID = 12345 is instant (and only returns 10 records). It seems like there should be some efficient way to get the Max(ApplNo) effect in a single query, but I'm a relative noob to SQL Server and not sure. I'm thinking I may need a separate key for ApplNo, but I'm not sure how clustered keys work.
SELECT BScore
FROM CustomerDB..TScore (NOLOCK)
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2 (NOLOCK)
WHERE sc2.CustomerID = 12345)
Thanks much for any tips (pointers on where to look for SQL Server optimization advice are appreciated as well).
When you filter by ApplNo, you are using only part of the key, and not the left-hand side. This means the index has to be scanned (look at all rows), not seeked (drill down to a row), to find the values.
If you are looking for ApplNo values for the same CustomerID:
Quick way. Use the full clustered index:
SELECT BScore
FROM CustomerDB..TScore
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345)
AND CustomerID = 12345
This can be changed into a JOIN
SELECT BScore
FROM
CustomerDB..TScore T1
JOIN
(SELECT Max(ApplNo) AS MaxApplNo, CustomerID
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345
) T2 ON T1.CustomerID = T2.CustomerID AND T1.ApplNo= T2.MaxApplNo
If you are looking for ApplNo values independent of CustomerID, then I'd look at a separate index. This matches the intent of your current code:
CREATE INDEX IX_ApplNo ON TScore (ApplNo) INCLUDE (BScore);
Reversing the key order won't help, because then your WHERE sc2.CustomerID = 12345 will scan, not seek.
Note: using NOLOCK everywhere is bad practice.
OK, so I think I must be misunderstanding something about SQL queries. This is a pretty wordy question, so thanks for taking the time to read it (my problem is right at the end; everything else is just context).
I am writing an accounting system that works on the double-entry principle: money always moves between accounts, and a transaction consists of 2 or more TransactionParts rows decrementing one account and incrementing another.
Some TransactionParts rows may be flagged as tax related so that the system can produce a report of total VAT sales/purchases etc., so it is possible for a single Transaction to have two TransactionParts referencing the same Account: one VAT related and the other not. To simplify presentation to the user, I have a view that combines multiple rows for the same account and transaction:
create view Accounting.CondensedEntryView as
select p.[Transaction], p.Account, sum(p.Amount) as Amount
from Accounting.TransactionParts p
group by p.[Transaction], p.Account
I then have a view to calculate the running balance column, as follows:
create view Accounting.TransactionBalanceView as
with cte as
(
select ROW_NUMBER() over (order by t.[Date]) AS RowNumber,
t.ID as [Transaction], p.Amount, p.Account
from Accounting.Transactions t
inner join Accounting.CondensedEntryView p on p.[Transaction]=t.ID
)
select b.RowNumber, b.[Transaction], a.Account,
coalesce(sum(a.Amount), 0) as Balance
from cte a, cte b
where a.RowNumber <= b.RowNumber AND a.Account=b.Account
group by b.RowNumber, b.[Transaction], a.Account
For reasons I haven't yet worked out, a certain transaction (ID=30) doesn't appear on the user's account statement. I confirmed this by running:
select * from Accounting.TransactionBalanceView where [Transaction]=30
This gave me the following result:
RowNumber Transaction Account Balance
-------------------- ----------- ------- ---------------------
72 30 23 143.80
As I said before, there should be at least two TransactionParts for each Transaction, so one of them isn't being presented by my view. I assumed there must be an issue with the way I've written the view, and ran a query to see if anything else was missing:
select [Transaction], count(*)
from Accounting.TransactionBalanceView
group by [Transaction]
having count(*) < 2
This query returns no results, not even for Transaction 30! Thinking I must be an idiot, I ran the following query:
select [Transaction]
from Accounting.TransactionBalanceView
where [Transaction]=30
It returns two rows! So select * returns only one row, and select [Transaction] returns both. After much head-scratching and re-running the last two queries, I concluded that I don't have the faintest idea what's happening. Any ideas?
Thanks a lot if you've stuck with me this far!
Edit:
Here are the execution plans (1000 lines each, hence hosted somewhere else):
select *
select [Transaction]
Edit 2:
For completeness, here are the tables I used:
create table Accounting.Accounts
(
ID smallint identity primary key,
[Name] varchar(50) not null
constraint UQ_AccountName unique,
[Type] tinyint not null
constraint FK_AccountType foreign key references Accounting.AccountTypes
);
create table Accounting.Transactions
(
ID int identity primary key,
[Date] date not null default getdate(),
[Description] varchar(50) not null,
Reference varchar(20) not null default '',
Memo varchar(1000) not null
);
create table Accounting.TransactionParts
(
ID int identity primary key,
[Transaction] int not null
constraint FK_TransactionPart foreign key references Accounting.Transactions,
Account smallint not null
constraint FK_TransactionAccount foreign key references Accounting.Accounts,
Amount money not null,
VatRelated bit not null default 0
);
Demonstration of a possible explanation.
Create table script:
SELECT *
INTO #T
FROM master.dbo.spt_values
CREATE NONCLUSTERED INDEX [IX_T] ON #T ([name] DESC,[number] DESC);
Query one (returns 35 results):
WITH cte AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY NAME) AS rn
FROM #T
)
SELECT c1.number,c1.[type]
FROM cte c1
JOIN cte c2 ON c1.rn=c2.rn AND c1.number <> c2.number
Query two (same as before, but adding c2.[type] to the select list makes it return 0 results):
;
WITH cte AS
(
SELECT *, ROW_NUMBER() OVER (ORDER BY NAME) AS rn
FROM #T
)
SELECT c1.number,c1.[type] ,c2.[type]
FROM cte c1
JOIN cte c2 ON c1.rn=c2.rn AND c1.number <> c2.number
Why?
row_number() for duplicate NAMEs isn't specified, so it just chooses whichever order fits best with the execution plan for the required output columns. In the second query this is the same for both CTE invocations; in the first, it chooses a different access path, with resultant different row numbering.
Suggested Solution
You are self-joining the CTE on ROW_NUMBER() over (order by t.[Date]).
Contrary to what you may have expected, the CTE will likely not be materialised (which would have ensured consistency for the self-join), so you are assuming a correlation between the ROW_NUMBER() on both sides that may well not exist for records with duplicate [Date] values in the data.
What if you try ROW_NUMBER() over (order by t.[Date], t.ID) to ensure that, in the event of tied dates, the row numbering is in a guaranteed consistent order? (Or some other column/combination of columns that can differentiate records, if ID won't do it.)
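Applied to the view from the question, only the ORDER BY inside ROW_NUMBER changes (a sketch of the suggested fix):

alter view Accounting.TransactionBalanceView as
with cte as
(
    select ROW_NUMBER() over (order by t.[Date], t.ID) AS RowNumber, -- t.ID added as tie-breaker
           t.ID as [Transaction], p.Amount, p.Account
    from Accounting.Transactions t
    inner join Accounting.CondensedEntryView p on p.[Transaction]=t.ID
)
select b.RowNumber, b.[Transaction], a.Account,
       coalesce(sum(a.Amount), 0) as Balance
from cte a, cte b
where a.RowNumber <= b.RowNumber AND a.Account=b.Account
group by b.RowNumber, b.[Transaction], a.Account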
If the purpose of this part of the view is just to make sure that the same row isn't joined to itself
where a.RowNumber <= b.RowNumber
then how does changing this part to
where a.RowNumber <> b.RowNumber
affect the results?
It seems you are reading dirty entries (someone else deletes/inserts new data).
Try SET TRANSACTION ISOLATION LEVEL READ COMMITTED.
I've tried this code (it seems equivalent to yours):
IF object_id('tempdb..#t') IS NOT NULL DROP TABLE #t
CREATE TABLE #t(i INT, val INT, acc int)
INSERT #t
SELECT 1, 2, 70
UNION ALL SELECT 2, 3, 70
;with cte as
(
select ROW_NUMBER() over (order by t.i) AS RowNumber,
t.val as [Transaction], t.acc Account
from #t t
)
select b.RowNumber, b.[Transaction], a.Account
from cte a, cte b
where a.RowNumber <= b.RowNumber AND a.Account=b.Account
group by b.RowNumber, b.[Transaction], a.Account
and got two rows:
RowNumber Transaction Account
1 2 70
2 3 70