Recently we have run into performance issues with a particular query on SQL Server (2016). The problem I'm seeing is that the performance issues are incredibly inconsistent and I'm not sure how to improve this.
The table details:
CREATE TABLE ContactRecord
(
ContactSeq BIGINT NOT NULL
, ApplicationCd VARCHAR(2) NOT NULL
, StartDt DATETIME2 NOT NULL
, EndDt DATETIME2
, EndStateCd VARCHAR(3)
, UserId VARCHAR(10)
, UserTypeCd VARCHAR(2)
, LineId VARCHAR(3)
, CallingLineId VARCHAR(20)
, DialledLineId VARCHAR(20)
, ChannelCd VARCHAR(2)
, SubChannelCd VARCHAR(2)
, ServicingAgentCd VARCHAR(7)
, EucCopyTimestamp VARCHAR(30)
, PRIMARY KEY (ContactSeq)
, FOREIGN KEY (ApplicationCd) REFERENCES ApplicationType(ApplicationCd)
, FOREIGN KEY (EndStateCd) REFERENCES EndStateType(EndStateCd)
, FOREIGN KEY (UserTypeCd) REFERENCES UserType(UserTypeCd)
)
CREATE TABLE TransactionRecord
(
TransactionSeq BIGINT NOT NULL
, ContactSeq BIGINT NOT NULL
, TransactionTypeCd VARCHAR(3) NOT NULL
, TransactionDt DATETIME2 NOT NULL
, PolicyId VARCHAR(10)
, ProductId VARCHAR(7)
, EucCopyTimestamp VARCHAR(30)
, Detail VARCHAR(1000)
, PRIMARY KEY (TransactionSeq)
, FOREIGN KEY (ContactSeq) REFERENCES ContactRecord(ContactSeq)
, FOREIGN KEY (TransactionTypeCd) REFERENCES TransactionType(TransactionTypeCd)
)
Current record counts:
ContactRecord: 20 million
TransactionRecord: 90 million
My query is:
select
UserId,
max(StartDt) as LastLoginDate
from
ContactRecord
where
ContactSeq in
(
select
ContactSeq
from
TransactionRecord
where
ContactSeq in
(
select
ContactSeq
from
ContactRecord
where
UserId in
(
'1234567890',
'1234567891' -- Etc.
)
)
and TransactionRecord.TransactionTypeCd not in
(
'122'
)
)
and ApplicationCd not in
(
'1',
'4',
'5'
)
group by
UserId;
Now the query isn't great and could be improved using joins; however, it does fundamentally work.
The problem I'm having is that our data job takes an input of roughly 7100 user IDs. These are broken up into groups of 500, and each group of 500 is used in the IN clause of this query. The first 14 executions of this query, each with 500 items in the IN clause, execute fine; results are returned in roughly 15-20 seconds each.
The issue is with the last execution, which covers the remaining 100 or so IDs. It never seems to complete; it just hangs, and in our data job it times out after 10 minutes. I have no idea why. I'm not an expert with SQL Server so I'm not really sure how to debug this. I have executed each subquery independently and then replaced the contents of the subquery with the returned data; doing this for each subquery works fine.
Any help is really appreciated, as I'm at a loss as to how this works so consistently with larger numbers of parameters but fails with only a fraction of them.
EDIT
I've got three examples of execution plans here. Please note that each of these was executed on a test server, and all executed almost instantly as there is very little data on this test equivalent.
This is the execution plan for 500 arguments which executes fine in production, returning in roughly 15-20 seconds:
This is the execution plan for 119 arguments which is timing out in our data job after 10 minutes:
This is the execution plan for 5 arguments which executes fine. This query is not explicitly being executed in the data job but just for comparison:
In all instances SSMS has given the following warning:
/*
Missing Index Details from SQLQuery2.sql
The Query Processor estimates that implementing the following index could improve the query cost by 26.3459%.
*/
/*
USE [CloasIvr]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[TransactionRecord] ([TransactionTypeCd])
INCLUDE ([ContactSeq])
GO
*/
Is this the root cause of the problem?
Without seeing the execution plans for the failing runs, it is hard to know exactly what's going wrong. Plans from the 'good' runs can help a bit, but for the bad runs we're just guessing.
My initial guess (similar to my comment) is that the row estimates are very wrong, and the optimizer therefore produces a very bad plan.
Your TransactionRecord table in particular, with its 1000-character Detail column, could have big issues with an unexpectedly large number of nested loops.
Indexes
The first thing I would suggest is indexing - particularly indexes that a) include only the subset of data you need for these queries, and b) are ordered in a useful manner.
I suggest the following two indexes, which should help:
CREATE INDEX IX_ContactRecord_User ON ContactRecord
(UserId, ContactSeq)
INCLUDE (ApplicationCD, Startdt);
CREATE INDEX IX_TransactionRecord_ContactSeq ON TransactionRecord
(ContactSeq, TransactionTypeCd);
These are both 'covering indexes', as well as being sorted in ways that can help.
Alternatively, you could replace the first one with a slightly modified version (sorting first on ContactSeq) but I think the above version would be more useful.
CREATE INDEX IX_ContactRecord_User2 ON ContactRecord
(ContactSeq)
INCLUDE (ApplicationCD, Startdt, UserId);
Also, regarding the index on TransactionRecord - if this is the only query that would be using that index, you could improve it by creating the following index instead
CREATE INDEX IX_TransactionRecord_ContactSeq_Filtered ON TransactionRecord
(ContactSeq, TransactionTypeCd)
WHERE (TransactionTypeCD <> '122');
The above is a filtered index that matches what's specified in the WHERE clause of your statement. The big thing about this is that it has already a) excluded the records where the type is '122', and b) sorted the records on ContactSeq, so it's then easy to look them up.
By the way - given you asked about adding indexes on Foreign Keys on principle - the use of these really depends on how you read the data. If you are only ever referring to the referenced table (e.g., you have an FK to a status table, and only ever use it to report, in English, the statuses) then an index on the original table's Status_ID wouldn't help. On the other hand, if you want to find all the rows with Status_ID = 4, then it would help.
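For illustration only, a minimal sketch of such an index, reusing the hypothetical Status_ID example above (the table name Orders is made up):
-- Only worth adding if you filter or join on the FK column itself, e.g. WHERE Status_ID = 4
CREATE INDEX IX_Orders_Status_ID ON Orders (Status_ID);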
To help with understanding indexes, I strongly recommend Brent Ozar's How to Think Like the SQL Server Engine - it really helped me understand how indexes work in practice.
Use a sorted temp table
This may help, but is unlikely to be the primary fix. If you pre-load the relevant UserIDs into a temporary table (with a primary key on UserID), it may help with the relevant JOIN. It may also be easier to modify for each run than editing the middle of the query.
CREATE TABLE #Users (UserId VARCHAR(10) PRIMARY KEY);
INSERT INTO #Users (UserID) VALUES
('1234567890'),
('1234567891');
Then replace the middle section of your query with
where
ContactSeq in
(
select
ContactSeq
from
ContactRecord CR
INNER JOIN #Users U ON CR.UserID = U.UserID
)
and TransactionRecord.TransactionTypeCd not in
(
'122'
)
Simplify the query
I had a go at simplifying the query, and got it to this:
select CR.UserId,
max(CR.StartDt) as LastLoginDate
from ContactRecord CR
INNER JOIN TransactionRecord TR ON CR.ContactSeq = TR.ContactSeq
where TR.TransactionTypeCd not in ('122')
AND CR.ApplicationCd not in ('1', '4', '5')
AND CR.UserId in ('1234567890', '1234567891') -- etc
group by UserId;
or alternatively (with the temp table)
select CR.UserId,
max(CR.StartDt) as LastLoginDate
from ContactRecord CR
INNER JOIN #Users U ON CR.UserID = U.UserID
INNER JOIN TransactionRecord TR ON CR.ContactSeq = TR.ContactSeq
where TR.TransactionTypeCd not in ('122')
AND CR.ApplicationCd not in ('1', '4', '5')
group by UserId;
One advantage of simplifying the query is that it also helps SQL Server get good estimates, which in turn help it build good execution plans.
Of course, you would need to test that the above returns exactly the same records in your circumstances - I don't have a data set to test on, so I cannot be 100% sure these simplified versions match the original.
Related
My situation is as follows:
I have these tables:
CREATE TABLE [dbo].[HeaderResultPulser]
(
[Id] BIGINT IDENTITY (1, 1) NOT NULL,
[ReportNumber] CHAR(255) NOT NULL,
[ReportDescription] CHAR(255) NOT NULL,
[CatalogNumber] NCHAR(255) NOT NULL,
[WorkerName] NCHAR(255) DEFAULT ('') NOT NULL,
[LastCalibrationDate] DATETIME NOT NULL,
[NextCalibrationDate] DATETIME NOT NULL,
[MachineNumber] INT NOT NULL,
[EditTime] DATETIME NOT NULL,
[Age] NCHAR(255) DEFAULT ((1)) NOT NULL,
[Current] INT DEFAULT ((-1)) NOT NULL,
[Time] BIGINT DEFAULT ((-1)) NOT NULL,
[MachineName] NVARCHAR(MAX) DEFAULT ('') NOT NULL,
[BatchNumber] NVARCHAR(MAX) DEFAULT ('') NOT NULL,
CONSTRAINT [PK_HeaderResultPulser]
PRIMARY KEY CLUSTERED ([Id] ASC)
);
CREATE TABLE [dbo].[ResultPulser]
(
[Id] BIGINT IDENTITY (1, 1) NOT NULL,
[ReportNumber] CHAR(255) NOT NULL,
[BatchNumber] CHAR(255) NOT NULL,
[DateTime] DATETIME NOT NULL,
[Ocv] FLOAT(53) NOT NULL,
[OcvMin] FLOAT(53) NOT NULL,
[OcvMax] FLOAT(53) NOT NULL,
[Ccv] FLOAT(53) NOT NULL,
[CcvMin] FLOAT(53) NOT NULL,
[CcvMax] FLOAT(53) NOT NULL,
[Delta] BIGINT NOT NULL,
[DeltaMin] BIGINT NOT NULL,
[DeltaMax] BIGINT NOT NULL,
[CurrentFail] BIT DEFAULT ((0)) NOT NULL,
[NumberInTest] INT NOT NULL
);
For every row in HeaderResultPulser I have multiple rows in ResultPulser.
My key is [HeaderResultPulser].[ReportNumber], which I use to find the matching rows in ResultPulser; there are many rows with the same [ResultPulser].[ReportNumber], each with its own [ResultPulser].[NumberInTest] value.
For example: in the ResultPulser table the data can look like this:
ReportNumber | NumberInTest
-------------+-------------
0000006211 | 1
0000006211 | 2
0000006211 | 3
0000006211 | 4
0000006211 | 5
0000006211 | 6
0000006212 | 1
0000006212 | 2
0000006212 | 3
0000006212 | 4
0000006212 | 5
NumberInTest can be 200, 500, 10000 and sometimes even more.
The report number column contains two parts: the first 7 characters are the machine number and the rest is an incrementing number.
For example, 0000006212 is [0000006][212] == [the machine number][the incrementing number]
My query, for example:
select
[HeaderResultPulser].[ReportNumber],
max(NumberInTest) as TotalCells
from
ResultPulser, HeaderResultPulser
where
((([ResultPulser].[ReportNumber] like '0000006%' and
CONVERT(INT, SUBSTRING([ResultPulser].[ReportNumber], 8, LEN([ResultPulser].[ReportNumber]))) BETWEEN '211' AND '815')
and ([HeaderResultPulser].[ReportNumber] = [ResultPulser].[ReportNumber])))
group by
[HeaderResultPulser].[ReportNumber]
Actually, I want to get all the rows for machine number 0000006 whose incrementing number is from 211 to 815 (both inclusive).
This query takes about 6-7 seconds.
There is a lot of data (hundreds of millions of rows, heading towards billions, and the ResultPulser table can grow much larger in future), and the HeaderResultPulser table can reach tens of thousands of rows.
The select itself only returns a few hundred rows, in the worst case a thousand or maybe two thousand, but to compute max(NumberInTest) from ResultPulser it has to read far more rows (it can get to a few million).
Is there any way to optimize my query? Or with this much data is this run time simply unavoidable (just the way it is)?
The way you are doing joins is no longer standard. It's also hard to read, and dangerous if you ever need to use left joins. Instead of joining this way:
select *
from T1, T2
where T1.column = T2.column
Use ANSI-92 join syntax instead:
select *
from T1
join T2 on T1.column = T2.column
You said that your "key" was ReportNumber. Why isn't that declared in your schema? It sounds like you want a unique constraint on HeaderResultPulser.ReportNumber, and a foreign key on the ResultPulser table, such that ReportNumber references HeaderResultPulser (ReportNumber).
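As a sketch of what those declarations could look like (constraint names are illustrative, and this assumes the existing data already satisfies them):
-- Declare the business key explicitly
ALTER TABLE dbo.HeaderResultPulser
    ADD CONSTRAINT UQ_HeaderResultPulser_ReportNumber UNIQUE (ReportNumber);
-- Ensure every ResultPulser row points at an existing header row
ALTER TABLE dbo.ResultPulser
    ADD CONSTRAINT FK_ResultPulser_HeaderResultPulser
    FOREIGN KEY (ReportNumber) REFERENCES dbo.HeaderResultPulser (ReportNumber);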
Since your report number column seems to contain two different values, your table is not in First Normal Form. This is making things difficult for you. Why not split the two parts of the "report number" into two different columns when the data is entered? This will significantly improve your query performance, because you no longer need to perform an expression against the data in the table at query time to separate the ReportNumber into atomic values.
Your comment says that the first 7 characters of the ReportNumber are the MachineNumber. But you already have MachineNumber in the HeaderResultPulser table. So why not just add a separate column for Increment? If you still need ReportNumber to exist as a column, you can make it a calculated column, as the concatenation of MachineNumber and Increment.
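For example, a sketch of that arrangement (assuming a new integer Increment column and the 7-digit zero-padded machine number shown in the question; the column name ReportNumberComputed is made up to avoid clashing with the existing column):
ALTER TABLE dbo.HeaderResultPulser ADD Increment INT NULL;

-- Rebuild the report number from its two parts when it is still needed
ALTER TABLE dbo.HeaderResultPulser
    ADD ReportNumberComputed AS
        RIGHT('0000000' + CAST(MachineNumber AS VARCHAR(7)), 7)
        + CAST(Increment AS VARCHAR(10));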
If you don't want to touch the "existing" schema, we can do a similar thing in reverse. Your query will not be completely sargable unless you can do something to the schema, because you have to perform some kind of expression on the data in the ReportNumber column. But maybe you have the option to use a calculated column to do this up front:
alter table HeaderResultPulser
add Increment as right(rtrim(ReportNumber), len(rtrim(ReportNumber)) - 7);
Now we have the increment as a column in its own right. But it's still being calculated at query time, because it's not persisted. We can make it persisted:
alter table HeaderResultPulser
add Increment as right(rtrim(ReportNumber), len(rtrim(ReportNumber)) - 7) persisted;
We can also index a computed column. Since your required expression is deterministic and precise (see Indexes on Computed Columns), we don't actually have to mark it as persisted:
alter table HeaderResultPulser
add Increment as right(rtrim(ReportNumber), len(rtrim(ReportNumber)) - 7);
create index ix_headerresultpulser_increment on HeaderResultPulser(Increment);
You could do a similar set of operations to create the Increment and MachineNumber columns on the ResultPulser table. If you always want to use both values, create an index on the combination of (MachineNumber, Increment).
The biggest performance gain might come from eliminating the outer GROUP BY by using a correlated subquery or lateral join:
select hrp.[ReportNumber],
(select max(rp.NumberInTest)
from ResultPulser rp
where rp.ReportNumber = hrp.ReportNumber and
right(rtrim(rp.ReportNumber), 3) between '211' and '815'
) as TotalCells
from HeaderResultPulser hrp
where hrp.ReportNumber like '0000006%';
Your logic looks like it only wants the last three characters of the ReportNumber, so I simplified the logic. I'm not 100% sure that is the case -- it just seems reasonable. Regardless, there is no need to convert the values to integers and then compare them as strings. And similar logic can be used even for longer report numbers.
You also want an index on ResultPulser(ReportNumber, NumberInTest):
create index idx_resultpulser_reportnumber_numberintest on ResultPulser(ReportNumber, NumberInTest)
EDIT:
Actually, I notice that the report number matches between the two tables. So this seems simplest:
select hrp.[ReportNumber],
(select max(rp.NumberInTest)
from ResultPulser rp
where rp.ReportNumber = hrp.ReportNumber
) as TotalCells
from HeaderResultPulser hrp
where hrp.ReportNumber >= '0000006211' and
hrp.ReportNumber <= '0000006815';
You still want to be sure you have the above index on ResultPulser.
If the ReportNumber is not a fixed 10 digits, then you can use:
where hrp.ReportNumber >= '0000006211' and
hrp.ReportNumber <= '0000006815' and
len(hrp.ReportNumber) = 10
This should also use the index and return exactly what you want.
Performance optimization of any query depends on many factors, including the environment where you host and run it. Hardware and software play an important part in optimizing heavy database queries. In your case you can look into the following things:
Use ANSI-92 JOIN syntax instead of the implicit (comma-separated) cross join,
e.g.
select *
from T1
join T2 on T1.column = T2.column
Put indexes on columns like
[ReportNumber]
[NumberInTest]
Note: you may need an index on each column involved in the join that is not a primary key.
Remember that MAX is a relatively heavy aggregate, and that could be the main problem in your query.
Finally you can further look into optimizing your query syntax using following online tool where you can specify your actual query and environment you are using:
https://www.eversql.com/
Hope it helps.
If you really want to optimize performance, I propose adding a bit of logic beyond the SQL constructs themselves.
Is it possible that a particular value of ReportNumber is present in the ResultPulser table but not in HeaderResultPulser? If not, and I suppose that is so, there is no reason to join the HeaderResultPulser table at all.
Then, I propose taking advantage of the fact that the condition on ReportNumber can be expressed equivalently without splitting it into substrings. For your example, the condition
([ResultPulser].[ReportNumber] like '0000006%' and
CONVERT(INT, SUBSTRING([ResultPulser].[ReportNumber], 8,
LEN([ResultPulser].[ReportNumber]))) BETWEEN '211' AND '815')
is equivalent to:
([ResultPulser].[ReportNumber] BETWEEN '0000006211' and '0000006815')
So the proposal is:
Create index on table ResultPulser(ReportNumber, NumberInTest)
Use selections similar to this:
select ReportNumber, max(NumberInTest) as TotalCells
from ResultPulser
where
ReportNumber BETWEEN '0000006211' and '0000006815'
group by
ReportNumber
(Please add brackets or double quotes and capitalization as necessary for MS SQL Server and your taste.)
I would expect a good database to execute this query with index-only access, which will be optimal from an execution point of view.
Performance depends not only on the execution path, but also on setup and hardware. Please make sure that your database has enough cache and fast disk access. Concurrent load is also very important.
Simply splitting the field ReportNumber into [the machine number] and [the incrementing number] will probably not improve the performance of the query in the form I proposed. But it may be very convenient for other forms of access (other WHERE clauses), and it will reflect the structure of the case. Even more important: it will free you from imposed limits. Currently you have 3 digits for [the incrementing number]. Are you sure it will never be necessary to have more than 999 of them for a single [the machine number]?
Why does the field ReportNumber have type char(255) when only 10 characters are used? char(255) has a fixed length, so it is a terrible waste of space, and only database compression can help. Used space has a strong influence on performance - please consider the remark above about the database cache.
If both of these fields, [the machine number] and [the incrementing number], are integers, why not split ReportNumber and store them with an integer type?
Side remark: the field names suggest that you want the total number of rows in ResultPulser that belong to a single entry in HeaderResultPulser. The proposed query will deliver this only if the numbers in NumberInTest are consecutive, without gaps. If that is not guaranteed, you have to count the rows rather than take the maximum.
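If the numbers can have gaps, a counting variant of the same selection (a sketch, still relying on the index proposed above) would be:
select ReportNumber, count(*) as TotalCells
from ResultPulser
where
ReportNumber BETWEEN '0000006211' and '0000006815'
group by
ReportNumber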
I have a table with a VARCHAR column and an index on it. Whenever a SELECT COUNT(*) is done on this table with a check for COLUMN = N'' OR COLUMN IS NULL, it returns double the number of rows. SELECT * with the same WHERE clause returns the correct number of records.
After reading this article: https://sqlquantumleap.com/2017/07/10/impact-on-indexes-when-mixing-varchar-and-nvarchar-types/ and doing some testing, I believe the collation of the column and the implicit conversion aren't at fault (at least not directly). The collation of the column is Latin1_General_CI_AS.
The database is on SQL Server 2012, and I've tested on 2016 as well.
I've created a test script (below) that will demonstrate this problem. In doing so, I believe that it may be related to data paging, as it needed a bit of data in the table for it to occur.
CREATE TABLE [dbo].TEMP
(
ID [varchar](50) COLLATE Latin1_General_CI_AS NOT NULL,
[DATA] [varchar](200) COLLATE Latin1_General_CI_AS NULL,
[TESTCOLUMN] [varchar](50) COLLATE Latin1_General_CI_AS NULL,
CONSTRAINT [PK_TEMP] PRIMARY KEY CLUSTERED ([ID] ASC)
)
GO
CREATE NONCLUSTERED INDEX [I_TEMP_TESTCOLUMN] ON dbo.TEMP (TESTCOLUMN ASC)
GO
DECLARE #ROWS AS INT = 40;
WITH NUMBERS (NUM) AS
(
SELECT 1 AS NUM
UNION ALL
SELECT NUM + 1 FROM NUMBERS WHERE NUM < #ROWS
)
INSERT INTO TEMP (ID, DATA)
SELECT NUM, '1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901324561234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890'
FROM NUMBERS
SELECT #ROWS AS EXPECTED, COUNT(*) AS ACTUALROWS
FROM TEMP
GO
SELECT COUNT(*) AS INVALIDINDEXSEARCHCOUNT
FROM TEMP
WHERE (TESTCOLUMN = N'' OR TESTCOLUMN IS NULL)
GO
DROP TABLE TEMP
I'm able to modify the database to some extent (I won't be able to change data, or change the column to disallow NULL), but unfortunately I am not able to modify the code doing the search. Can anyone identify a way to get the correct COUNT(*) results returned?
TLDR: This is a bug in the product (reported here).
The poor practice that exposes this bug is mismatched datatypes (varchar column being compared to nvarchar) - on SQL collations this would just cause an implicit cast of the column to nvarchar and a full scan.
On Windows collations this can still result in a seek. This is generally a useful performance optimisation but here you have hit an edge case...
More Detail: use the below setup...
CREATE TABLE dbo.TEMP
(
ID INT IDENTITY PRIMARY KEY,
[TESTCOLUMN] [varchar](50) COLLATE Latin1_General_CI_AS NULL INDEX [I_TEMP_TESTCOLUMN],
Filler AS CAST('X' AS CHAR(8000)) PERSISTED
)
--Add 7 rows where TESTCOLUMN is NOT NULL
INSERT dbo.TEMP([TESTCOLUMN]) VALUES ('aardvark'), ('badger'),
('badges'), ('cat'),
('dog'), ('elephant'),
('zebra');
--Add 49 rows where TESTCOLUMN is NULL
INSERT dbo.TEMP([TESTCOLUMN])
SELECT NULL
FROM dbo.TEMP T1 CROSS JOIN dbo.TEMP T2
Then first look at the actual execution plan for
SELECT COUNT(*)
FROM dbo.TEMP
WHERE TESTCOLUMN = N'badger'
OPTION (RECOMPILE)
In SQL collations the implicit cast to nvarchar would make the predicate entirely unsargable. With Windows collations SQL Server is able to add apparatus to the plan where a compute scalar calls an internal function GetRangeThroughConvert(N'badger',N'badger',(62)), and the resulting values are fed into a nested loops join to give start and end points for an index seek (the article "Dynamic Seeks and Hidden Implicit Conversions" has some more details about this plan shape).
It is not exposed in the execution plan what range start and end values this internal function returns, but it is possible to see them if you happen to have a SQL Server build where the short-lived query_trace_column_values extended event has not been disabled. In the case above the function returns (badger, badgeS, 62) and these values are used in the index seek. As I added a row with the value "badges", the seek ends up reading one more row than strictly necessary, and the residual predicate retains only the one for "badger".
Now try
SELECT COUNT(*)
FROM dbo.TEMP
WHERE TESTCOLUMN = N''
OPTION (RECOMPILE)
The GetRangeThroughConvert function appears to give up when asked to provide a range for an empty string and outputs (null, null, 0).
The nulls here indicate that both ends of the range are unbounded, so effectively the index seek just ends up reading the whole index from first row to last.
The above shows the index seek read all 56 rows, but the residual predicate did the job of removing all those not matching TESTCOLUMN = N'' (so the operator returns zero rows).
In general the seek predicate used here seems to act like a prefix search (e.g. the seek [TESTCOLUMN] = N'A' will read at least all rows starting with A, with the residual predicate doing the equality check), so my expectations for an empty string would not be high in the first place, but Paul White indicates that the range being seeked here is likely a bug anyway.
When you add the OR predicate to the query the execution plan changes.
It now ends up getting two outer rows to the nested loops join and so ends up doing two seeks (two executions of the seek operator on the inside of the nested loops).
One for the TESTCOLUMN = N'' case and one for the TESTCOLUMN IS NULL case. The values used for the TESTCOLUMN = N'' branch are still calculated through the GetRangeThroughConvert call (as this is the only way SQL Server can do a seek for this mismatched datatype case), so they still have the expanded range including NULL.
The problem is that the residual predicate on the index seek now also changes.
It is now
CONVERT_IMPLICIT(nvarchar(50),[tempdb].[dbo].[TEMP].[TESTCOLUMN],0)=N''
OR [tempdb].[dbo].[TEMP].[TESTCOLUMN] IS NULL
The previous residual predicate of
CONVERT_IMPLICIT(nvarchar(50),[tempdb].[dbo].[TEMP].[TESTCOLUMN],0)=N''
would not be suitable as this would incorrectly remove the rows with NULL that need to be retained for the OR TESTCOLUMN IS NULL branch.
This means that when the seek for the N'' branch is done it still ends up reading all the rows with NULL as before, but the residual predicate is no longer fit for purpose at removing them.
It might also seem a bit of a miss that the merge interval in the problem plan does not merge the overlapping ranges for the index seeks.
I assume this does not happen due to the different flags values from the two branches. Expr1014 has a value of 60 for the IS NULL branch and 0 for the = N'' branch.
In my test, which was on SQL 2019, when one removes the N and just compares against '' or null, the double counting goes away.
SELECT COUNT(*) AS ACTUALROWS
FROM TEMP
WHERE (TESTCOLUMN = '' OR TESTCOLUMN IS NULL)
The N prefix, indicating Unicode, is inappropriate anyway as the search column is not of type NVARCHAR. If the test column were of type NVARCHAR, the count would be correct.
Eric Kassan's answer is correct:
The column in the table is VARCHAR, but you are searching as if the column is NVARCHAR.
These are two different datatypes, so the column should be changed to NVARCHAR, or the query should be changed by removing the N.
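For illustration, a sketch of what that column change would involve on the test table from the question (the existing index on the column has to be dropped and recreated; this only applies if changing the column is an option in your environment):
DROP INDEX [I_TEMP_TESTCOLUMN] ON dbo.TEMP;

ALTER TABLE dbo.TEMP
    ALTER COLUMN TESTCOLUMN NVARCHAR(50) COLLATE Latin1_General_CI_AS NULL;

CREATE NONCLUSTERED INDEX [I_TEMP_TESTCOLUMN] ON dbo.TEMP (TESTCOLUMN ASC);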
Why the result is doubled up when joining different datatypes is interesting, but that was not the question. :)
In SQLite I have a large DB (~35Mb).
It contains a table with the following syntax:
CREATE TABLE "log_temperature" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"date" datetime NOT NULL,
"temperature" varchar(20) NOT NULL
)
Now when I want to search for data within a period, it is too slow on an embedded system:
$ time sqlite3 sample.sqlite "select MIN(id) from log_temperature where
date BETWEEN '2019-08-13 00:00:00' AND '2019-08-13 23:59:59';"
331106
real 0m2.123s
user 0m0.847s
sys 0m0.279s
Note1: ids are running from 210610 to 331600.
Note2: if I run 'SELECT id FROM log_temperature ORDER BY id ASC LIMIT 1', it gives exactly the same timing as with the 'MIN' function.
I want the 'real' time of 0m2.123s to be as close to 0m0.0s as possible.
What are my options for making this faster? (Without removing hundreds of thousands of data?)
ps.: the embedded system parameters are not important here. This should be solved by optimizing the query or the underlying schema.
First, I would recommend that you write the query as:
select MIN(id)
from log_temperature
where date >= '2019-08-13' and date < '2019-08-14';
This doesn't impact performance, but it makes the query easier to write -- and no need to fiddle with times.
Then, you want an index on (date, id):
create index idx_log_temperature_date_id on log_temperature(date, id);
I don't think id is needed in the index, if it is declared as the primary key of the table.
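A quick way to check that SQLite is actually using the new index (a sketch; index and column names as defined above):
-- The plan should report a SEARCH using the date index rather than a full table SCAN
EXPLAIN QUERY PLAN
select MIN(id)
from log_temperature
where date >= '2019-08-13' and date < '2019-08-14';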
Can you create an index on the date?
CREATE INDEX index_name ON log_temperature(date);
I am having serious performance issues when using a nested loop in a WHERE clause.
When I run the below code as is, it takes several minutes. The trick is I'm using the WHERE clause to pull ALL data if the report_id is NULL, but only certain report_id's if I set them in the parameter string.
The function [fn_Parse_List] turns a VARCHAR string such as '123,456,789' into a table where each row is each number in integer form, which is then used in the IN clause.
When I run the code below with report_id = '456' (the dashed out portion), the code takes seconds, but passing the temporary table and using the SELECT statement in the WHERE clause kills it.
alter procedure dbo.p_revenue
(@report_id varchar(max) = NULL)
as
select cast(value as int) Report_ID
into #report_ID_Temp
from [fn_Parse_List] (@report_id)
SELECT *
FROM BIGTABLE a
where @report_id is null
or a.report_id in (select Report_ID from #report_ID_Temp)
--Where @report_id is null or a.report_id in (456)
exec p_revenue @report_id = '456'
Is there a way to optimize this? I tried a JOIN with the table #report_ID_Temp, but it still takes just as long and doesn't work when the report_id is NULL.
You're breaking three different rules.
If you want two query plans, you need two queries: OR does not give you two query plans. IF does.
If you have a temporary table, make sure it has a primary key and any appropriate indexes. In your case, you need an ALTER TABLE statement to add the primary key clustered index. Or you can CREATE TABLE to declare the structure in the first place (see the sketch after this list).
If you think fn_Parse_List is a good idea, you haven't read enough Sommarskog
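For the second point, a minimal sketch of declaring the temp table up front with a clustered primary key (assuming, as in the question, that fn_Parse_List returns a column named value):
-- Declare the structure first so the clustered primary key exists before the load
CREATE TABLE #report_ID_Temp (Report_ID INT NOT NULL PRIMARY KEY CLUSTERED);

INSERT INTO #report_ID_Temp (Report_ID)
SELECT CAST(value AS INT)
FROM [fn_Parse_List](@report_id);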
If I were to write the Stored Procedure for your case, I would use a Table Valued Parameter (TVP) instead of passing multiple values as a comma-seperated string.
Something like the following:
-- Create a type for the TVP
CREATE TYPE REPORT_IDS_PAR AS TABLE(
report_id INT
);
GO
-- Use the TVP type instead of VARCHAR
CREATE PROCEDURE dbo.revenue
@report_ids REPORT_IDS_PAR READONLY
AS
BEGIN
SET NOCOUNT ON;
IF NOT EXISTS(SELECT 1 FROM @report_ids)
SELECT
*
FROM
BIGTABLE;
ELSE
SELECT
*
FROM
@report_ids AS ids
INNER JOIN BIGTABLE AS bt ON
bt.report_id=ids.report_id;
-- OPTION(RECOMPILE) -- see remark below
END
GO
-- Execute the Stored Procedure
DECLARE @ids REPORT_IDS_PAR;
-- Empty table for all rows:
EXEC dbo.revenue @ids;
-- Specific report_id's for specific rows:
INSERT INTO @ids(report_id)VALUES(123),(456),(789);
EXEC dbo.revenue @ids;
GO
If you run this procedure with a TVP with a lot of rows or a wildly varying number of rows, I suggest you add the option OPTION(RECOMPILE) to the query.
I see 2 possible things that could help improve performance. Depends on which part is taking the longest. First off, SELECT INTO is a single threaded operation until SQL Server 2014. If this is taking a long time, create an explicitly defined temp table with CREATE TABLE. Secondly, depending on the number of records inserted into the temp table, you probably need an index on the Report_ID column. That can all be done in the body of the stored procedure. If you do end up using an explicitly defined temp table, I would create the index after the data is loaded.
If that doesn't help, first check that the report_id column on the BIGTABLE is indexed. Then try splitting the select into 2 and combining with a UNION ALL like this:
ALTER PROCEDURE dbo.p_revenue
(
@report_id VARCHAR(MAX) = NULL
)
AS
SELECT CAST(value AS INT) Report_ID
INTO #report_ID_Temp
FROM fn_Parse_List(@report_id);
SELECT *
FROM BIGTABLE
WHERE @report_id IS NULL
UNION ALL
SELECT *
FROM BIGTABLE a
WHERE a.report_id IN ( SELECT Report_ID
FROM #report_ID_Temp );
GO
EXEC p_revenue @report_id = '456';
Are you saying I should have two queries, one that pulls everything if the report_id doesn't exist and one where there is a list of report_ids?
Yes, yes, yes. The fact that it somehow works when you enter the numbers directly distracts you from the core problem. You need a table scan when @report_id is null and an index seek when it is not, and you cannot have both in one execution plan. The performance would inevitably have to suffer, one way or another.
I would prefer not to, as the table I'm pulling from is actually a
view with 800 lines and an additional parameter not shown above.
I do not see where the problem is; SELECT * FROM BIGTABLE and SELECT * FROM BIGVIEW seem the same. If you need parameters you can use an inline table-valued function. If you have more parameters with variable selectivity like @report_id, I guess you would end up with dynamic SQL anyway, sooner or later.
UNION ALL as proposed by @db_brad would help, but one of those subqueries is executed even when there is no need for it.
As a quick patch you can append OPTION(RECOMPILE) to the SELECT and get a table scan one time and an index seek the other, but recompiling every time would introduce nontrivial overhead.
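A sketch of that quick patch applied to the query from the question:
SELECT *
FROM BIGTABLE a
WHERE @report_id IS NULL
   OR a.report_id IN (SELECT Report_ID FROM #report_ID_Temp)
OPTION (RECOMPILE); -- recompiled per execution, so the NULL and non-NULL cases can get different plans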
Here is my query -- the last query is what is causing me pain:
The address.postcode field is a varchar(14) and you can see the input format the user sends in.
DECLARE @ZipCode NVARCHAR(MAX) = ('06409;06471;11763;06443;06371;11949;11946;11742')
IF OBJECT_ID('tempdb..#ZipCodes') IS NOT NULL DROP TABLE #ZipCodes;
CREATE TABLE #ZipCodes (
Zipcode NVARCHAR(6)
)
INSERT INTO #ZipCodes ( Zipcode )
SELECT zip.Token + '%'
FROM DMS.fn_SplitList(@ZipCode, ';') zip
CREATE NONCLUSTERED INDEX [idx_Zip] ON #ZipCodes (Zipcode)
IF OBJECT_ID('tempdb..#ZipCodesConstituents') IS NOT NULL DROP TABLE #ZipCodesConstituents;
CREATE TABLE #ZipCodesConstituents (
ConstituentID UNIQUEIDENTIFIER
, PostCode NVARCHAR(12)
)
CREATE NONCLUSTERED INDEX [idx_ZipCodesConstituents] ON #ZipCodesConstituents (ConstituentID, PostCode)
INSERT INTO #ZipCodesConstituents ( ConstituentID, PostCode )
SELECT a.CONSTITUENTID
, a.POSTCODE
FROM #ZipCodes zip
JOIN DMS.address a
ON a.POSTCODE LIKE zip.Zipcode
where a.ISPRIMARY = 1
I am trying to attach the execution plan -- but not having any luck...
Basically this section of code has an estimated cost of 61.9%,
and the Sort accounts for 61.5%.
I tried to reproduce the behavior, but in my tests I can't force a sort operator into the last INSERT. What I do see are the following two issues.
You create the index and afterwards insert into the table. This may be harmful; not in all cases, but it may force the sort in your query, as SQL Server tries to keep the rows in index order during the insert.
You're using UNIQUEIDENTIFIER for your keys. This may be useful in a way, but I think in your case a simple IDENTITY(1,1) column would be enough, wouldn't it? A UNIQUEIDENTIFIER in your index will cause heavy fragmentation, so it might not be the best way to solve this.
I tested both variants with a test set of 100,000 rows.
These are my results, measured in execution cost:
In all cases, the costs are significantly lower if the index is created after the INSERT into #ZipCodesConstituents.
Using an IDENTITY instead of a UNIQUEIDENTIFIER will additionally boost performance.
It would be wise to add an index to your Address table if you run this sort of query more often.
Here are the measurements in cost-points (cp) - the lower the better:
UniqueIdentifier + Index before: 20 cp for the Insert.
UniqueIdentifier + Index after Insert: 8 cp for the Insert + 7 for the Index (where the SORT occurs).
Identity + Index before: 18 cp for the Insert
Identity + Index after: 7 cp for the Insert + 6 for the Index
Identity + Index after + Index on Address: 3 for the Insert + 6 for the Index
And the winner is: an identity column, with the index created after the insert, and maybe an index on your Address table too.
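A sketch of that winning combination, using the temp table from the question (the RowId identity column is just one assumed way to lay it out):
CREATE TABLE #ZipCodesConstituents (
      RowId INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED -- narrow identity key instead of a GUID
    , ConstituentID UNIQUEIDENTIFIER
    , PostCode NVARCHAR(12)
);

INSERT INTO #ZipCodesConstituents (ConstituentID, PostCode)
SELECT a.CONSTITUENTID, a.POSTCODE
FROM #ZipCodes zip
JOIN DMS.address a ON a.POSTCODE LIKE zip.Zipcode
WHERE a.ISPRIMARY = 1;

-- Build the supporting index only after the data is loaded
CREATE NONCLUSTERED INDEX [idx_ZipCodesConstituents]
    ON #ZipCodesConstituents (ConstituentID, PostCode);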
The index I used to boost both of the statements (UniqueIdentifier and Identity) is this one:
CREATE NONCLUSTERED INDEX [NCI_Adress_IsPrimary_Postcode]
ON Address ([IsPrimary],[PostCode])
INCLUDE ([Constituentid])
In my test case it took 13 cp to build it. If you only use this once, it won't be helpful! If you use this statement quite often, or even several times a day or week, it may be useful for you.
Hopefully this will solve your problems.