Need refactoring / indexing advice for a very large table - SQL

Ok, so I have a table that's just become a monster. And querying on it has become insanely slow for some of our customers. Here's the table in question:
CREATE TABLE [EventTime](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[EventId] [bigint] NOT NULL,
[Time] [datetime] NOT NULL,
CONSTRAINT [PK_EventTime] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
)
CREATE NONCLUSTERED INDEX [IX_EventTime_Main] ON [EventTime]
(
[Time] ASC,
[EventId] ASC
)
It has an FK to the Events table. An event is an action taken by a certain user, IP, service, and accountId. This EventTime table tells us which events happened at what time. An event can happen today at 3am and also at 12pm last week. The idea is to not duplicate event rows.
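For reference, the [Event] side looks roughly like this. This is a sketch inferred from the queries below; the exact types and full column list are assumptions:
CREATE TABLE [Event](
[Id] [bigint] IDENTITY(1,1) NOT NULL, -- joined to EventTime.EventId
[TrailId] [bigint] NOT NULL, -- identifies the customer
[NameId] [bigint] NOT NULL,
[ResourceId] [bigint] NULL,
[AccountId] [bigint] NULL,
[ServiceId] [bigint] NULL,
CONSTRAINT [PK_Event] PRIMARY KEY CLUSTERED ([Id] ASC)
)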
Now this EventTime table has become massive for some customers; our biggest is at 240 million rows and growing. And querying it has become insanely slow when looking at a time window longer than a few days. Here's the query we're executing today (note: I'm running queries locally against a copy of the DB to rule out network latency or timeouts caused by collectors hitting the DB):
SELECT
a.TrailId, a.[NameId], a.[ResourceId], a.[AccountId], a.[ServiceId]
FROM [EventTime] b WITH (NOLOCK) INNER JOIN [Event] a WITH (NOLOCK) ON a.Id = b.EventId
WHERE
a.TrailId IN (1, 2, 3, 4, 5) AND
a.NameId IN (6) AND
b.[Time] >= '2014-10-29 00:00:00.000' AND
b.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY b.[Time] ASC
Note: TrailId is a column in the Event table that tells us which customer to filter down to in the query; we have the list of TrailIds before we execute it. This query is very slow, about 45 minutes to execute. Here are some queries I've tried:
SELECT
a.EventId, a.[NameId], a.[ResourceId], a.[AccountId], a.[ServiceId]
FROM [EventTime] b WITH(NOLOCK)
JOIN [Event] a WITH(NOLOCK) on a.Id = b.EventId
WHERE
b.EventId IN (SELECT Id from [Event] where TrailId IN (1, 2, 3, 4, 5) AND NameId IN (6) ) AND
b.[Time] >= '2014-08-01 00:00:00.000' AND
b.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY b.[Time] ASC
The subquery worked well for small date ranges, but for larger ones performance suffered greatly. Next I tried:
DECLARE @ListofIDs TABLE(Ids bigint)
INSERT INTO @ListofIDs (Ids)
SELECT Id FROM [Event] WHERE TrailId IN (140, 629, 630, 631, 632) AND NameId IN (468)
SELECT
a.EventId, a.[NameId], a.[ResourceId], a.[AccountId], a.[ServiceId]
FROM [EventTime] b WITH(NOLOCK)
JOIN [Event] a WITH(NOLOCK) on a.Id = b.EventId
WHERE
b.EventId IN (SELECT Ids FROM @ListofIDs) AND
b.[Time] >= '2014-08-01 00:00:00.000' AND
b.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY b.[Time] ASC
Feeding the subquery results into a table variable for the main query to reference did help a bit: the query took about 33 minutes. But it's still way, way too slow =/
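A related variation on the same idea, sketched here under the same filters: a temp table with a primary key carries statistics that a table variable lacks, which can give the optimizer better row estimates for the join.
-- Hypothetical variation (not one of the attempts above): a temp table
-- instead of a table variable, so the optimizer has real statistics.
CREATE TABLE #ListofIDs (Id bigint PRIMARY KEY)
INSERT INTO #ListofIDs (Id)
SELECT Id FROM [Event] WHERE TrailId IN (140, 629, 630, 631, 632) AND NameId IN (468)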
Next I tried playing with indexes. I figured I might have been putting too much into one index, so I dropped the existing index and broke it out into two:
CREATE NONCLUSTERED INDEX [IX_EventTime_Main] ON [EventTime]
(
[Time] ASC
)
GO
CREATE NONCLUSTERED INDEX [IX_EventTime_Event] ON [EventTime]
(
[EventId] ASC
)
This didn't seem to do anything. Same query times.
I think the core issue is that this table is just very unorganized. The Time column holds very specific time values, and none of them arrive in order: customer 8's collector might be saving EventTimes for 2014-11-12 04:12:01.000 while customer 10 is saving 2015-03-15 13:59:21.000. So the query has to process and sort all these dates before filtering down, and indexing [Time] probably isn't effective at all.
Anyone have any ideas on how I can speed this up?

This is your query:
SELECT e.TrailId, e.[NameId], e.[ResourceId], e.[AccountId], e.[ServiceId]
FROM [EventTime] et WITH (NOLOCK) INNER JOIN
[Event] e WITH (NOLOCK)
ON e.Id = et.EventId
WHERE e.TrailId IN (1, 2, 3, 4, 5) AND
e.NameId = 6 AND
et.[Time] >= '2014-10-29 00:00:00.000' AND
et.[Time] <= '2014-11-12 23:59:59.000'
ORDER BY et.[Time] ASC
The best indexes for this query are probably Event(NameId, TrailId) and EventTime(EventId, Time). This assumes that the result set is not humongous (tens of millions of rows); if it is, an optimization to get rid of the ORDER BY would be desirable.
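Written out as DDL, that suggestion would look like this (the index names are illustrative):
CREATE NONCLUSTERED INDEX IX_Event_NameId_TrailId ON [Event] ([NameId], [TrailId])
CREATE NONCLUSTERED INDEX IX_EventTime_EventId_Time ON [EventTime] ([EventId], [Time])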

I would ditch the ID column and make the primary key a composite clustered one on EventId and Time:
CREATE TABLE [EventTime](
[EventId] [bigint] NOT NULL,
[Time] [datetime] NOT NULL,
CONSTRAINT [PK_EventTime] PRIMARY KEY CLUSTERED
(
[EventId] ASC
, [Time] ASC
)
)
CREATE NONCLUSTERED INDEX [IX_EventTime_Main] ON [EventTime]
(
[Time] ASC,
[EventId] ASC
);
Check the execution plans to see if the non-clustered index is used, and drop it if it is not needed.

Related

SQL Server, query using right index but still slow

I have a query as follows:
SELECT 1
FROM [Call]
INNER JOIN [Caller] ON [Call].callerId = [Caller].id
WHERE [Call].[phoneNumber] = @phoneNumber
AND [Caller].callerType = @callerType
AND [Call].[time] > @time
AND [Call].[status] = @status
AND [Call].[type] <> @type
There is a clustered primary key index on the [Caller] id column. There is a non-clustered index on [Call] as follows:
CREATE INDEX IX_callerId_time_phonenumber_status_type
ON dbo.[Call]
(
[callerId] ASC,
[time] ASC,
[phoneNumber] ASC,
[status] ASC,
[type] ASC
)
I notice in the execution plan that 90% of the cost of my query is as follows:
Predicate:
[Call].[status] = 10 AND [Call].[type] <> 10
Object:
[Call].[IX_callerId_time_phonenumber_status_type]
So it's using the right index but I'm still getting bad performance. Any ideas?
The predicate [Call].[time] > @time is fairly unselective, but the structure of your index forces SQL Server to give it priority over other criteria that are probably more selective. It likely has to scan a big chunk of the index for each callerId. Reordering the index like this would probably improve performance for this particular query:
CREATE INDEX IX_callerId_time_phonenumber_status_type
ON dbo.[Call]
(
[callerId] ASC,
[phoneNumber] ASC,
[status] ASC,
[time] ASC,
[type] ASC
)
Not knowing whether "Time" is really a time datatype or a varchar datatype, I would suggest the following index:
CREATE NONCLUSTERED INDEX IX_callerId_time_phonenumber_status_type
ON [dbo].[Call] ([PhoneNumber],[Status],[Time],[Type])
Have you ruled out parameter sniffing? I wonder if the optimizer is stuck on a plan compiled for unrepresentative parameter values.
This is my favorite article on the subject: http://www.brentozar.com/archive/2013/06/the-elephant-and-the-mouse-or-parameter-sniffing-in-sql-server/
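A quick way to rule parameter sniffing in or out is to force a fresh plan for the current values; this sketch just adds a hint to the original query, and if it runs fast, a stale cached plan is the likely culprit:
-- OPTION (RECOMPILE) compiles a plan for these exact parameter values,
-- sidestepping any plan cached for unrepresentative ones.
SELECT 1
FROM [Call]
INNER JOIN [Caller] ON [Call].callerId = [Caller].id
WHERE [Call].[phoneNumber] = @phoneNumber
AND [Caller].callerType = @callerType
AND [Call].[time] > @time
AND [Call].[status] = @status
AND [Call].[type] <> @type
OPTION (RECOMPILE)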

Leaderboard design using SQL Server

I am building a leaderboard for some of my online games. Here is what I need to do with the data:
Get the rank of a player for a given game across multiple time frames (today, last week, all time, etc.)
Get paginated rankings (e.g. top score for the last 24 hrs., players between rank 25 and 50, the rank of a single user)
I came up with the following table definition and index, and I have a couple of questions.
Considering my scenarios, do I have a good primary key? The reason I have a clustered key across gameId, playerName and score is simply that I want all data for a given game in the same area, with scores already sorted. Most of the time I will display the data in descending order of score (+ updatedDateTime for ties) for a given gameId. Is this the right strategy? In other words, I want to make sure I can run my queries to get the rank of my players as fast as possible.
CREATE TABLE score (
[gameId] [smallint] NOT NULL,
[playerName] [nvarchar](50) NOT NULL,
[score] [int] NOT NULL,
[createdDateTime] [datetime2](3) NOT NULL,
[updatedDateTime] [datetime2](3) NOT NULL,
PRIMARY KEY CLUSTERED ([gameId] ASC, [playerName] ASC, [score] DESC, [updatedDateTime] ASC)
)
CREATE NONCLUSTERED INDEX [Score_Idx] ON score ([gameId] ASC, [score] DESC, [updatedDateTime] ASC) INCLUDE ([playerName])
Below is the first iteration of the query I will be using to get the rank of my players. However, I am a bit disappointed by the execution plan. Why does SQL Server need to sort? The additional sort seems to come from the RANK function. But isn't my data already sorted in descending score order (based on the clustered key of the score table)? I am also wondering if I should normalize the table a bit more and move the PlayerName column out into a Player table. I originally decided to keep everything in the same table to minimize the number of joins.
DECLARE @GameId AS INT = 0
DECLARE @From AS DATETIME2(3) = '2013-10-01'
SELECT DENSE_RANK() OVER (ORDER BY Score DESC), s.PlayerName, s.Score, s.CountryCode, s.updatedDateTime
FROM [mrgleaderboard].[score] s
WHERE s.GameId = @GameId
AND (s.UpdatedDateTime >= @From OR @From IS NULL)
Thank you for the help!
[Updated]
Your primary key is not good
You have a unique entity, [GameID] + [PlayerName], and your composite clustered index is over 120 bytes wide because of the nvarchar column. See the answer by @marc_s in the related topic SQL Server - Clustered index design for dictionary.
Your table schema does not match your time-period requirements
For example: I earn a score of 300 on Wednesday and it is stored on the leaderboard. The next day I earn 250, but it is not recorded, so a query for that day's leaderboard returns nothing for me.
For complete information you can keep a historical table of played-game scores, but it can be very expensive:
CREATE TABLE GameLog (
[id] int NOT NULL IDENTITY
CONSTRAINT [PK_GameLog] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[score] int NOT NULL,
[createdDateTime] datetime2(3) NOT NULL)
Here are some solutions to accelerate the aggregation:
Indexed views on the historical table (see the post by @Twinkles).
You need three indexed views for the three time periods. Downsides: the potentially huge size of the historical table plus three indexed views, no way to remove the "old" periods from the table, and a performance hit when saving scores.
Asynchronous leaderboard
Scores are saved in the historical table. A SQL job/"worker" (or several), on a schedule (once per minute?), sorts the historical table and populates the leaderboard table (three tables for three time periods, or one table with a time-period key) with the precalculated rank of each user. This table can also be denormalized (holding score, datetime, PlayerName, and so on). Pros: fast reads (no sorting), fast score saves, arbitrary time periods, flexible logic and flexible schedules. Cons: a user who has just finished a game does not immediately find themselves on the leaderboard.
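A minimal sketch of such a job body, assuming a snapshot table LeaderboardDaily(gameId, playerId, score, rnk) (a hypothetical name) and the GameLog table above:
-- Rebuild the daily snapshot: best score per player over the last 24h,
-- with the rank precalculated so readers never have to sort.
TRUNCATE TABLE LeaderboardDaily
INSERT INTO LeaderboardDaily (gameId, playerId, score, rnk)
SELECT gameId, playerId, score,
DENSE_RANK() OVER (PARTITION BY gameId ORDER BY score DESC)
FROM (SELECT gameId, playerId, MAX(score) AS score
FROM GameLog
WHERE createdDateTime >= DATEADD(DAY, -1, SYSDATETIME())
GROUP BY gameId, playerId) AS best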
Preaggregated leaderboard
When recording the results of a game session, do the aggregation up front. In your case, something like UPDATE [Leaderboard] SET score = @CurrentScore WHERE @CurrentScore > MAX(score) AND ... for the player / game id, but that alone only covers the "All time" leaderboard. The scheme might look like this:
CREATE TABLE [Leaderboard] (
[id] int NOT NULL IDENTITY
CONSTRAINT [PK_Leaderboard] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[timePeriod] tinyint NOT NULL, -- 0 -all time, 1-monthly, 2 -weekly, 3 -daily
[timePeriodFrom] date NOT NULL, -- '1900-01-01' for all time, '2013-11-01' for monthly, etc.
[score] int NOT NULL,
[createdDateTime] datetime2(3) NOT NULL
)
playerId timePeriod timePeriodFrom Score
----------------------------------------------
1 0 1900-01-01 300
...
1 1 2013-10-01 150
1 1 2013-11-01 300
...
1 2 2013-10-07 150
1 2 2013-11-18 300
...
1 3 2013-11-19 300
1 3 2013-11-20 250
...
So you have to update a score row for each time period. Also, as you can see, the leaderboard will contain "old" periods, such as the monthly period for October; you may want to delete them if you do not need those statistics. Pros: no historical table needed. Cons: a more complicated procedure for storing a result, leaderboard maintenance, and the query still requires sorting and a JOIN.
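For illustration, the storing procedure for the daily period might look like the sketch below (assuming parameters @GameId, @PlayerId, @CurrentScore; the other periods need the same pair of statements, and concurrency handling is omitted). A complete worked example of the schema and queries follows.
-- Sketch: raise the player's daily score if beaten, insert a row if absent.
DECLARE @updated int
UPDATE [Leaderboard]
SET score = @CurrentScore
WHERE gameId = @GameId AND playerId = @PlayerId
AND [timePeriod] = 3 AND [timePeriodFrom] = CAST(GETDATE() AS date)
AND score < @CurrentScore
SET @updated = @@ROWCOUNT
IF @updated = 0 AND NOT EXISTS (SELECT 1 FROM [Leaderboard]
WHERE gameId = @GameId AND playerId = @PlayerId
AND [timePeriod] = 3 AND [timePeriodFrom] = CAST(GETDATE() AS date))
INSERT INTO [Leaderboard] (gameId, playerId, timePeriod, timePeriodFrom, score, createdDateTime)
VALUES (@GameId, @PlayerId, 3, CAST(GETDATE() AS date), @CurrentScore, SYSDATETIME())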
CREATE TABLE [Player] (
[id] int NOT NULL IDENTITY CONSTRAINT [PK_Player] PRIMARY KEY CLUSTERED,
[playerName] nvarchar(50) NOT NULL CONSTRAINT [UQ_Player_playerName] UNIQUE NONCLUSTERED)
CREATE TABLE [Leaderboard] (
[id] int NOT NULL IDENTITY CONSTRAINT [PK_Leaderboard] PRIMARY KEY CLUSTERED,
[gameId] smallint NOT NULL,
[playerId] int NOT NULL,
[timePeriod] tinyint NOT NULL, -- 0 -all time, 1-monthly, 2 -weekly, 3 -daily
[timePeriodFrom] date NOT NULL, -- '1900-01-01' for all time, '2013-11-01' for monthly, etc.
[score] int NOT NULL,
[createdDateTime] datetime2(3)
)
CREATE UNIQUE NONCLUSTERED INDEX [UQ_Leaderboard_gameId_playerId_timePeriod_timePeriodFrom] ON [Leaderboard] ([gameId] ASC, [playerId] ASC, [timePeriod] ASC, [timePeriodFrom] ASC)
CREATE NONCLUSTERED INDEX [IX_Leaderboard_gameId_timePeriod_timePeriodFrom_Score] ON [Leaderboard] ([gameId] ASC, [timePeriod] ASC, [timePeriodFrom] ASC, [score] ASC)
GO
-- Generate test data
-- Generate 500K unique players
;WITH digits (d) AS (SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION
SELECT 4 UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9 UNION SELECT 0)
INSERT INTO Player (playerName)
SELECT TOP (500000) LEFT(CAST(NEWID() as nvarchar(50)), 20 + (ABS(CHECKSUM(NEWID())) & 15)) as Name
FROM digits CROSS JOIN digits ii CROSS JOIN digits iii CROSS JOIN digits iv CROSS JOIN digits v CROSS JOIN digits vi
-- Random score 500K players * 4 games = 2M rows
INSERT INTO [Leaderboard] (
[gameId],[playerId],[timePeriod],[timePeriodFrom],[score],[createdDateTime])
SELECT GameID, Player.id,ABS(CHECKSUM(NEWID())) & 3 as [timePeriod], DATEADD(MILLISECOND, CHECKSUM(NEWID()),GETDATE()) as Updated, ABS(CHECKSUM(NEWID())) & 65535 as score
, DATEADD(MILLISECOND, CHECKSUM(NEWID()),GETDATE()) as Created
FROM ( SELECT 1 as GameID UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4) as Game
CROSS JOIN Player
ORDER BY NEWID()
UPDATE [Leaderboard] SET [timePeriodFrom]='19000101' WHERE [timePeriod] = 0
GO
DECLARE @From date = '19000101'--'20131108'
,@GameID int = 3
,@timePeriod tinyint = 0
-- Get paginated ranking
;With Lb as (
SELECT
DENSE_RANK() OVER (ORDER BY Score DESC) as Rnk
,Score, createdDateTime, playerId
FROM [Leaderboard]
WHERE GameId = @GameId
AND [timePeriod] = @timePeriod
AND [timePeriodFrom] = @From)
SELECT lb.rnk,lb.Score, lb.createdDateTime, lb.playerId, Player.playerName
FROM Lb INNER JOIN Player ON lb.playerId = Player.id
ORDER BY rnk OFFSET 75 ROWS FETCH NEXT 25 ROWS ONLY;
-- Get rank of a player for a given game
SELECT (SELECT COUNT(DISTINCT rnk.score)
FROM [Leaderboard] as rnk
WHERE rnk.GameId = @GameId
AND rnk.[timePeriod] = @timePeriod
AND rnk.[timePeriodFrom] = @From
AND rnk.score >= [Leaderboard].score) as rnk
,[Leaderboard].Score, [Leaderboard].createdDateTime, [Leaderboard].playerId, Player.playerName
FROM [Leaderboard] INNER JOIN Player ON [Leaderboard].playerId = Player.id
WHERE [Leaderboard].GameId = @GameId
AND [Leaderboard].[timePeriod] = @timePeriod
AND [Leaderboard].[timePeriodFrom] = @From
AND Player.playerName = N'785DDBBB-3000-4730-B'
GO
This is only an example to present the idea; it can be optimized further. For example, combining the GameID, TimePeriod, and TimePeriodFrom columns into a single key through a dictionary table would make the index more effective.
P.S. Sorry for my English. Feel free to fix grammatical or spelling errors
You could look into indexed views to create scoreboards for common time ranges (today, this week/month/year, all-time).
To get the rank of a player for a given game across multiple timeframes, you will select the game and rank (i.e. sort) by score over multiple timeframes. For this, your nonclustered index could be changed as follows, since this is the way your SELECT seems to query:
CREATE NONCLUSTERED INDEX [Score_Idx]
ON score ([gameId] ASC, [updatedDateTime] ASC, [score] DESC)
INCLUDE ([playerName])
For the paginated ranking:
For the 24h top score, I guess you will want all the top scores of a single user across all games within the last 24h. For this you will be querying [playerName] and [updatedDateTime] along with [gameId].
For the players between rank 25 and 50, I assume you are talking about a single game with a long ranking that you can page through. The query will then be based upon [gameId], [score], and a little on [updatedDateTime] for the ties.
The single-user ranks, probably per game, are a little more difficult. You will need to query the leaderboards of all games to get the player's rank in each, and then filter on the player. You will need [gameId], [score], [updatedDateTime], and then filter by player.
Concluding all this, I propose you keep your nonclustered index and change the primary key to:
PRIMARY KEY CLUSTERED ([gameId] ASC, [score] DESC, [updatedDateTime] ASC)
For the 24h top score, I think this might help:
CREATE NONCLUSTERED INDEX [player_Idx]
ON score ([playerName] ASC)
INCLUDE ([gameId], [score])
The DENSE_RANK query sorts because it selects [gameId], [updatedDateTime], [score]. See my comment on the nonclustered index above.
I would also think twice about including [updatedDateTime] in your queries and subsequently in your indexes. Maybe sometimes two players get the same rank, why not? [updatedDateTime] will make your index swell up significantly.
You might also think about partitioning tables by [gameId].
As a bit of a sidetrack:
Ask yourself how accurate and how up to date do the scores in the leaderboard actually need to be?
As a player I don't care if I'm number 142134 in the world or number 142133. I do care if I beat my friends' exact score (but then I only need my score compared to a couple of other scores) and I want to know that my new highscore sends me from somewhere around 142000 to somewhere around 90000. (Yay!)
So if you want really fast leaderboards, you do not actually need all data to be up to date. You could daily or hourly compute a static sorted copy of the leaderboard and when showing player X's score, show at what rank it'd fit in the static copy.
When comparing to friends, last minute updates do matter, but you're dealing with only a couple hundred scores, so you can look up their actual scores in the up to date leaderboards.
Oh, and I care about the top 10 of course. Consider them my "friends" merely based on the fact that they scored so well, and show these values up to date.
Your clustered index is composite, which means the order is defined by more than one column. You request ORDER BY Score, which is not the leading column of the clustered index. For that reason, entries in the index are not necessarily in Score order, e.g. entries
1, 2, some date
2, 1, some other date
If you select just Score, the order will be
2
1
which needs to be sorted.
I would not put the score column into the clustered index, because it will probably change all the time, and updates to a column that's part of the clustered index are expensive.

Grouping by timeframes with a modifier that changes over time

After poring over a similar problem and finding it never got a complete solution, I have finally gotten to the heart of the problem I can't solve. I'm looking for the number of consecutive days that a person was prescribed a certain number of drugs. Because prescriptions begin and end, there can be multiple non-contiguous intervals during which a person is on X drugs. The following SQL script produces the result set of the query I'll post momentarily. (Also, I don't have SQL Server 2012.)
create table test
(pat_id int, cal_date date, grp_nbr int, drug_qty int,[ranking] int)
go
insert into test(pat_id,cal_date, grp_nbr,drug_qty,[ranking])
values
(1, '1/8/2007',7,2, 1),
(1, '1/9/2007',7,2, 1),
(1, '1/10/2007',7, 2,1),
(1, '1/11/2007',7, 2,1),
(1, '1/12/2007',7, 2,1),
(1, '1/13/2007',7, 2,1),
(1, '1/14/2007',7, 2,1),
(1, '1/15/2007',7, 2,1),
(1, '6/1/2007',7,2, 1),
(1, '6/2/2007',7,2, 1),
(1, '6/3/2007',7,2, 1)
Notice here that there are two non-contiguous intervals where this person was on two drugs at once. On the omitted days, drug_qty was more than two. The last column in this example was my attempt at adding another field that I could group by to help solve the problem (it didn't work).
Query to create tables:
CREATE TABLE [dbo].[rx](
[clmid] [int] IDENTITY(1,1) NOT NULL, -- claim id: referenced by the PK below but missing from the posted DDL
[pat_id] [int] NOT NULL,
[fill_Date] [date] NOT NULL,
[script_End_Date] AS (DATEADD(day, [days_Sup], [fill_Date])),
[drug_Name] [varchar](50) NULL,
[days_Sup] [int] NOT NULL,
[quantity] [float] NOT NULL,
[drug_Class] [char](3) NOT NULL,
CHECK (fill_Date <= script_End_Date),
PRIMARY KEY CLUSTERED
(
[clmid] ASC
)
)
CREATE TABLE [dbo].[Calendar](
[cal_date] [date] PRIMARY KEY,
[Year] AS YEAR(cal_date) PERSISTED,
[Month] AS MONTH(cal_date) PERSISTED,
[Day] AS DAY(cal_date) PERSISTED,
[julian_seq] AS 1+DATEDIFF(DD, CONVERT(DATE, CONVERT(varchar,YEAR(cal_date))+'0101'),cal_date),
id int identity);
The query I'm using to produce my result sets:
;WITH x AS (
    SELECT rx.pat_id,
           c.cal_date,
           COUNT(DISTINCT rx.drug_name) AS distinctDrugs
    FROM rx,
         calendar AS c
    WHERE c.cal_date BETWEEN rx.fill_date AND rx.script_end_date
      AND rx.ofinterest = 1
    GROUP BY rx.pat_id,
             c.cal_date
    -- the example used HAVING COUNT(1) = 2, but to illustrate the
    -- non-contiguous intervals, in practice I need the HAVING below
    HAVING COUNT(*) > 1
),
y AS (
    SELECT x.pat_id,
           x.cal_date,
           -- c2.id is the row number in the calendar table
           c2.id - ROW_NUMBER() OVER (PARTITION BY x.pat_id
                                      ORDER BY x.cal_date) AS grp_nbr,
           distinctDrugs
    FROM x,
         calendar AS c2
    WHERE c2.cal_date = x.cal_date
)
SELECT *,
       RANK() OVER (PARTITION BY pat_id, grp_nbr
                    ORDER BY distinctDrugs) AS [ranking]
FROM y
WHERE y.pat_id = 1604012867
  AND distinctDrugs = 2
Besides the fact that I shouldn't have a column in the calendar table named 'id', is there anything egregiously wrong with this approach? I can get the query to show me the distinct intervals where distinctDrugs = x, but it only works for that exact integer, not for anything > 1. By this I mean that I can find the separate intervals where a patient is on two drugs, but only when I use = 2 in the HAVING clause, not > 1. I can't do something like
SELECT pat_id,
Min(cal_date),
Max(cal_date),
distinctdrugs
FROM y
GROUP BY pat_id,
grp_nbr
because this will pick up that second group of non-contiguous dates. Does anyone know of an elegant solution to this problem?
The key to this is a simple observation: for a run of consecutive dates, the difference between each date and an increasing sequence is constant. The following does this, assuming you are using SQL Server 2005 or greater:
select pat_id, MIN(cal_date), MAX(cal_date), MIN(drug_qty)
from (select t.*,
             cast(cal_date as datetime) - ROW_NUMBER() over (partition by pat_id, drug_qty order by cal_date) as grouping
      from test t
     ) t
group by pat_id, grouping
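Against the sample data above, this collapses each contiguous run into a single row per island (expected result, worked out by hand from the INSERTs, not actual query output):
-- pat_id  min(cal_date)  max(cal_date)  drug_qty
-- 1       2007-01-08     2007-01-15     2
-- 1       2007-06-01     2007-06-03     2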

SQL Server 2005 query optimization with Max subquery

I've got a table that looks like this (I wasn't sure what all might be relevant, so I had Toad dump the whole structure)
CREATE TABLE [dbo].[TScore] (
[CustomerID] int NOT NULL,
[ApplNo] numeric(18, 0) NOT NULL,
[BScore] int NULL,
[OrigAmt] money NULL,
[MaxAmt] money NULL,
[DateCreated] datetime NULL,
[UserCreated] char(8) NULL,
[DateModified] datetime NULL,
[UserModified] char(8) NULL,
CONSTRAINT [PK_TScore]
PRIMARY KEY CLUSTERED ([CustomerID] ASC, [ApplNo] ASC)
);
And when I run the following query (on a database with 3 million records in the TScore table), it takes about a second to run, even though Select BScore from CustomerDB..TScore WHERE CustomerID = 12345 is instant (and only returns 10 records). It seems like there should be some efficient way to get the Max(ApplNo) effect in a single query, but I'm a relative noob to SQL Server and not sure how; I'm thinking I may need a separate key for ApplNo, but I'm not sure how clustered keys work.
SELECT BScore
FROM CustomerDB..TScore (NOLOCK)
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2 (NOLOCK)
WHERE sc2.CustomerID = 12345)
Thanks much for any tips (pointers on where to look for optimization of sql server stuff appreciated as well)
When you filter by ApplNo, you are using only part of the key, and not the left-hand side. This means the index has to be scanned (look at all rows), not seeked (drill down to a row), to find the values.
If you are looking for ApplNo values for the same CustomerID, the quick way is to use the full clustered index:
SELECT BScore
FROM CustomerDB..TScore
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345)
AND CustomerID = 12345
This can be changed into a JOIN
SELECT BScore
FROM
CustomerDB..TScore T1
JOIN
(SELECT Max(ApplNo) AS MaxApplNo, CustomerID
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345
) T2 ON T1.CustomerID = T2.CustomerID AND T1.ApplNo= T2.MaxApplNo
If you are looking for ApplNo values independent of CustomerID, then I'd look at a separate index. This matches the intent of your current code:
CREATE INDEX IX_ApplNo ON TScore (ApplNo) INCLUDE (BScore);
Reversing the key order won't help, because then your WHERE sc2.CustomerID = 12345 will scan, not seek.
Note: using NOLOCK everywhere is a bad practice.
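If blocking is what motivated the NOLOCK hints, row versioning is the usual safer alternative; it is a one-time database setting, sketched here with the CustomerDB name from the question:
-- Readers see a consistent snapshot instead of dirty pages.
-- Switching it on needs a moment with no other active connections.
ALTER DATABASE CustomerDB SET READ_COMMITTED_SNAPSHOT ON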

Aggregate Function/Group-By Query Performance

This query works (thanks to those who helped) to generate a 30-day moving average of volume.
SELECT x.symbol, x.dseqkey, AVG(y.VOLUME) moving_average
FROM STOCK_HIST x, STOCK_HIST y
WHERE x.dseqkey>=29 AND x.dseqkey BETWEEN y.dseqkey AND y.dseqkey+29
AND Y.Symbol=X.Symbol
GROUP BY x.symbol, x.dseqkey
ORDER BY x.dseqkey DESC
However, the performance is very bad. I am running the above against a view (STOCK_HIST) that brings two tables (A and B) together. Table A contains daily stock volume and the trading date for over 9,000 stocks going back as far as 40 years (300+ rows per year, per stock). Table B is a "date key" table that links the date in table A to DSEQKEY (an int).
What are my options for improving performance? I have heard that views are convenient but not performant. Should I just copy the columns needed from tables A and B into a single table and then run the above query? I have indexes on tables A and B: stock symbol + date on A, and DSEQKEY on B.
Is it the view that's killing my performance? How can I improve this?
EDIT
By request, I have posted the two tables and the view below. Also, there is now one clustered index on the view and on each table. I am open to any recommendations, as this query, which produces the desired result, is still slow:
SELECT
x.symbol
, x.dseqkey
, AVG(y.VOLUME) moving_average
FROM STOCK_HIST x
JOIN STOCK_HIST y ON x.dseqkey BETWEEN y.dseqkey AND y.dseqkey+29 AND Y.Symbol=X.Symbol
WHERE x.dseqkey >= 15000
GROUP BY x.symbol, x.dseqkey
ORDER BY x.dseqkey DESC ;
HERE IS THE VIEW:
CREATE VIEW [dbo].[STOCK_HIST]
WITH SCHEMABINDING
AS
SELECT
dbo.DATE_MASTER.date
, dbo.DATE_MASTER.year
, dbo.DATE_MASTER.quarter
, dbo.DATE_MASTER.month
, dbo.DATE_MASTER.week
, dbo.DATE_MASTER.wday
, dbo.DATE_MASTER.day
, dbo.DATE_MASTER.nday
, dbo.DATE_MASTER.wkmax
, dbo.DATE_MASTER.momax
, dbo.DATE_MASTER.qtrmax
, dbo.DATE_MASTER.yrmax
, dbo.DATE_MASTER.dseqkey
, dbo.DATE_MASTER.wseqkey
, dbo.DATE_MASTER.mseqkey
, dbo.DATE_MASTER.qseqkey
, dbo.DATE_MASTER.yseqkey
, dbo.DATE_MASTER.tom
, dbo.QP_HISTORY.Symbol
, dbo.QP_HISTORY.[Open] as propen
, dbo.QP_HISTORY.High as prhigh
, dbo.QP_HISTORY.Low as prlow
, dbo.QP_HISTORY.[Close] as prclose
, dbo.QP_HISTORY.Volume
, dbo.QP_HISTORY.QRS
FROM dbo.DATE_MASTER
INNER JOIN dbo.QP_HISTORY ON dbo.DATE_MASTER.date = dbo.QP_HISTORY.QPDate ;
HERE IS DATE_MASTER TABLE:
CREATE TABLE [dbo].[DATE_MASTER] (
[date] [datetime] NULL
, [year] [int] NULL
, [quarter] [int] NULL
, [month] [int] NULL
, [week] [int] NULL
, [wday] [int] NULL
, [day] [int] NULL
, [nday] nvarchar NULL
, [wkmax] [bit] NOT NULL
, [momax] [bit] NOT NULL
, [qtrmax] [bit] NOT NULL
, [yrmax] [bit] NOT NULL
, [dseqkey] [int] IDENTITY(1,1) NOT NULL
, [wseqkey] [int] NULL
, [mseqkey] [int] NULL
, [qseqkey] [int] NULL
, [yseqkey] [int] NULL
, [tom] [bit] NOT NULL
) ON [PRIMARY] ;
HERE IS THE QP_HISTORY TABLE:
CREATE TABLE [dbo].[QP_HISTORY] (
[Symbol] varchar NULL
, [QPDate] [date] NULL
, [Open] [real] NULL
, [High] [real] NULL
, [Low] [real] NULL
, [Close] [real] NULL
, [Volume] [bigint] NULL
, [QRS] [smallint] NULL
) ON [PRIMARY] ;
HERE IS THE VIEW (STOCK_HIST) INDEX
CREATE UNIQUE CLUSTERED INDEX [ix_STOCK_HIST] ON [dbo].[STOCK_HIST]
(
[Symbol] ASC,
[dseqkey] ASC,
[Volume] ASC
)
HERE IS THE QP_HIST INDEX
CREATE UNIQUE CLUSTERED INDEX [IX_QP_HISTORY] ON [dbo].[QP_HISTORY]
(
[Symbol] ASC,
[QPDate] ASC,
[Close] ASC,
[Volume] ASC
)
HERE IS THE INDEX ON DATE_MASTER
CREATE UNIQUE CLUSTERED INDEX [IX_DATE_MASTER] ON [dbo].[DATE_MASTER]
(
[date] ASC,
[dseqkey] ASC,
[wseqkey] ASC,
[mseqkey] ASC
)
I do not have any primary keys set up. Would that help performance?
EDIT: After making the suggested changes, the query is slower than before. What ran in 10m 44s has now been running for 30m and counting.
I made all of the requested changes, except that I did not rename the date column in Date_Master and I did not drop the QPDate column from QP_Hist. (I have reasons for this and do not see it impacting performance, since I'm not referring to that column in the query.)
REVISED QUERY
select x.symbol, x.dmdseqkey, avg(y.volume) as moving_average
from dbo.QP_HISTORY as x
join dbo.QP_HISTORY as y on (x.dmdseqkey between y.dmdseqkey and (y.dmdseqkey + 29))
and (y.symbol = x.symbol)
where x.dmdseqkey >= 20000
group by x.symbol, x.dmdseqkey
order by x.dmdseqkey desc ;
PK on QP_History
ALTER TABLE [dbo].[QP_HISTORY]
ADD CONSTRAINT [PK_QP_HISTORY] PRIMARY KEY CLUSTERED ([Symbol] ASC, [DMDSeqKey] ASC)
FK on QP_History
ALTER TABLE [dbo].[QP_HISTORY] ADD CONSTRAINT [FK_QP_HISTORY_DATE_MASTER] FOREIGN KEY([DMDSeqKey]) REFERENCES [dbo].[DATE_MASTER] ([dseqkey])
PK on Date_Master
ALTER TABLE [dbo].[DATE_MASTER]
ADD CONSTRAINT [PK_DATE_MASTER] PRIMARY KEY CLUSTERED ([dseqkey] ASC)
EDIT
HERE IS THE EXECUTION PLAN (attached as a screenshot; not reproduced here)
First, separate the join and the filter (edit: fixed the ON clause):
SELECT x.symbol, x.dseqkey, AVG(y.VOLUME) moving_average
FROM
STOCK_HIST x
JOIN
STOCK_HIST y ON x.dseqkey BETWEEN y.dseqkey AND y.dseqkey+29
AND Y.Symbol=X.Symbol
WHERE x.dseqkey>=29
GROUP BY x.symbol, x.dseqkey
ORDER BY x.dseqkey DESC
Also, what indexes do you have? I'd suggest an index on (dseqkey, symbol) INCLUDE (VOLUME).
Edit 3: you can't have an INCLUDE in a clustered index, my bad. Your syntax is OK.
Please try these permutations... the aim is to find the best index for the JOIN and WHERE, followed by the ORDER BY.
CREATE UNIQUE CLUSTERED INDEX [ix_STOCK_HIST] ON [dbo].[STOCK_HIST] (...
...[Symbol] ASC, [dseqkey] ASC, [Volume] ASC )
...[dseqkey] ASC, [Symbol] ASC, [Volume] ASC )
...[Symbol] ASC, [dseqkey] DESC, [Volume] ASC )
...[dseqkey] DESC, [Symbol] ASC, [Volume] ASC )
SQL Server does not support the LAG or LEAD clauses available in Oracle and PostgreSQL, nor does it support session variables like MySQL does.
Calculating aggregates against moving windows is a pain in SQL Server.
So, God knows I hate to say this, but in this case a CURSOR-based solution may be more efficient.
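(For readers on SQL Server 2012 or later: the windowed AVG added in that release expresses this directly. A sketch, assuming one row per symbol per trading day so that 29 preceding rows approximate the dseqkey range of the self-join:)
SELECT symbol, dseqkey,
    AVG(volume) OVER (PARTITION BY symbol
                      ORDER BY dseqkey
                      ROWS BETWEEN 29 PRECEDING AND CURRENT ROW) AS moving_average
FROM STOCK_HIST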
Try putting a clustered index on the view. That will persist the view to disk like a normal table, so your base tables won't have to be accessed every time.
That should speed things up a bit.
For a better answer, please post the link to your original question so we can see if a better solution can be found.
OK, so I'll start from the end: the model I would like to achieve is described in steps (a) through (h) below.
With this in place, you can run the query on the history table directly, no need for the view and join to the dbo.DATE_MASTER.
select
x.symbol
, x.dseqkey
, avg(y.volume) as moving_average
from dbo.QP_HISTORY as x
join dbo.QP_HISTORY as y on (x.dSeqKey between y.dSeqKey and (y.dSeqKey + 29))
and (y.symbol = x.symbol)
where x.dseqkey >= 15000
group by x.symbol, x.dseqkey
order by x.dseqkey desc
OPTION (ORDER GROUP) ;
The QP_HISTORY table is narrower than the STOCK_HIST view, so the query should be faster. "Redundant column removal" from joins is scheduled for the next generation of SQL Server (Denali), so for the time being narrower usually means faster, at least for large tables. Also, the JOIN ... ON and the WHERE clause nicely match the PK (Symbol, dSeqKey).
Now, how to achieve this:
a) Modify the [date] column in dbo.DATE_MASTER to be of type date instead of datetime. Rename it FullDate to avoid confusion. Not absolutely necessary, but it preserves my sanity.
b) Add PK to the dbo.DATE_MASTER
alter table dbo.DATE_MASTER add constraint pk_datemstr primary key (dSeqKey);
c) In the table QP_HISTORY, add the column dSeqKey and populate it for matching QPDate dates (see the sketch after this list).
d) Drop the QPDate column from the table.
e) Add PK and FK to the QP_HISTORY
alter table dbo.QP_HISTORY
add constraint pk_qphist primary key (Symbol, dSeqKey)
, constraint fk1_qphist foreign key (dSeqKey)
references dbo.DATE_MASTER(dSeqKey) ;
f) Drop all those indexes mentioned at the end of your question, at least for the time being.
g) I do not see the size of the Symbol field. Define it as narrow as possible.
h) Needless to say, implement and test this on a development system first.
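A sketch of step (c), assuming the FullDate rename from step (a) has been done:
-- Add the surrogate date key and backfill it from DATE_MASTER.
ALTER TABLE dbo.QP_HISTORY ADD dSeqKey int NULL
GO
UPDATE q
SET q.dSeqKey = d.dSeqKey
FROM dbo.QP_HISTORY AS q
JOIN dbo.DATE_MASTER AS d ON d.FullDate = q.QPDate
GO
-- Make it NOT NULL before adding the primary key in step (e):
ALTER TABLE dbo.QP_HISTORY ALTER COLUMN dSeqKey int NOT NULL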