Will an index be used with an OR clause in WHERE? - SQL

I wrote a stored procedure with optional parameters.
CREATE PROCEDURE dbo.GetActiveEmployee
@startTime DATETIME = NULL,
@endTime DATETIME = NULL
AS
SET NOCOUNT ON
SELECT columns
FROM table
WHERE (@startTime IS NULL OR table.StartTime >= @startTime) AND
(@endTime IS NULL OR table.EndTime <= @endTime)
I'm wondering whether indexes on StartTime and EndTime will be used?

Yes, they will be used (well, probably; check the execution plan, but I do know that the optionality of your parameters shouldn't make any difference).
If you are having performance problems with your query then it might be a result of parameter sniffing. Try the following variation of your stored procedure and see if it makes any difference:
CREATE PROCEDURE dbo.GetActiveEmployee
@startTime DATETIME = NULL,
@endTime DATETIME = NULL
AS
SET NOCOUNT ON
DECLARE @startTimeCopy DATETIME
DECLARE @endTimeCopy DATETIME
SET @startTimeCopy = @startTime
SET @endTimeCopy = @endTime
SELECT columns
FROM table
WHERE (@startTimeCopy IS NULL OR table.StartTime >= @startTimeCopy) AND
(@endTimeCopy IS NULL OR table.EndTime <= @endTimeCopy)
This disables parameter sniffing (SQL Server using the actual values passed to the SP to optimise the plan). Because local variables cannot be sniffed, the optimizer falls back on average-density statistics instead of the specific first-call values. In the past I've fixed some weird performance issues this way.
Another thing that you might want to try is splitting your query into several different statements depending on the NULL-ness of your parameters:
IF @startTime IS NULL
BEGIN
    IF @endTime IS NULL
        SELECT columns FROM table
    ELSE
        SELECT columns FROM table WHERE table.EndTime <= @endTime
END
ELSE
BEGIN
    IF @endTime IS NULL
        SELECT columns FROM table WHERE table.StartTime >= @startTime
    ELSE
        SELECT columns FROM table WHERE table.StartTime >= @startTime AND table.EndTime <= @endTime
END
This is messy, but might be worth a try if you are having problems. The reason it helps is that SQL Server can only cache a single execution plan per SQL statement, yet your one statement can potentially return vastly different result sets.
For example, if you pass in NULL and NULL you will return the entire table and the most optimal execution plan, however if you pass in a small range of dates it is more likely that a row lookup will be the most optimal execution plan.
With this query as a single statement SQL server is forced to choose between these two options, and so the query plan is likely to be sub-optimal in certain situations. By splitting the query into several statements however SQL server can have a different execution plan in each case.
(You could also use the exec function / dynamic SQL to achieve the same thing if you preferred)
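As a sketch, the dynamic-SQL variant of the same idea could build the WHERE clause only from the parameters that are present and run it through sp_executesql ("columns" and "table" here are placeholders carried over from the question):

```sql
-- Sketch only: build the filter from the non-NULL parameters.
DECLARE @sql NVARCHAR(MAX) = N'SELECT columns FROM table WHERE 1 = 1';

IF @startTime IS NOT NULL
    SET @sql += N' AND table.StartTime >= @startTime';
IF @endTime IS NOT NULL
    SET @sql += N' AND table.EndTime <= @endTime';

-- Parameters are passed through, not concatenated, so there is no injection
-- risk, and each distinct WHERE shape gets its own cached plan.
EXEC sp_executesql @sql,
    N'@startTime DATETIME, @endTime DATETIME',
    @startTime = @startTime, @endTime = @endTime;
```

This gets the same one-plan-per-shape benefit as the IF/ELSE version without the combinatorial explosion of branches as parameters are added.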

There is a great article on dynamic search criteria in SQL. The method I personally use from the article is the X = @X OR @X IS NULL style with OPTION (RECOMPILE) added at the end. If you read the article it will explain why:
http://www.sommarskog.se/dyn-search-2008.html
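Applied to the query from the question, that pattern would look something like this sketch ("columns" and "table" are the question's placeholders):

```sql
-- OPTION (RECOMPILE) rebuilds the plan on every execution with the actual
-- parameter values known, so the optimizer can fold away the NULL branches
-- and seek on the remaining predicates.
SELECT columns
FROM table
WHERE (table.StartTime >= @startTime OR @startTime IS NULL)
  AND (table.EndTime <= @endTime OR @endTime IS NULL)
OPTION (RECOMPILE);
```

The trade-off is a compilation on every call, which only matters if the procedure is executed very frequently.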

Yes, based on the query provided indexes on or including the StartTime and EndTime columns can be used.
However, the [variable] IS NULL OR ... pattern makes the query non-sargable. If you don't want to use an IF statement (note that CASE is an expression and cannot be used for control-of-flow logic), dynamic SQL is the next alternative for performant SQL.
IF @startTime IS NOT NULL AND @endTime IS NOT NULL
BEGIN
SELECT columns
FROM TABLE
WHERE starttime >= @startTime
AND endtime <= @endTime
END
ELSE IF @startTime IS NOT NULL
BEGIN
SELECT columns
FROM TABLE
WHERE starttime >= @startTime
END
ELSE IF @endTime IS NOT NULL
BEGIN
SELECT columns
FROM TABLE
WHERE endtime <= @endTime
END
ELSE
BEGIN
SELECT columns
FROM TABLE
END

Dynamically changing searches based on the given parameters is a complicated subject, and doing it one way over another, even with only a very slight difference, can have massive performance implications. The key is to get an index used: ignore compact code, ignore worrying about repeated code; you must produce a good query execution plan (one that uses an index).
Read this and consider all the methods. Your best method will depend on your parameters, your data, your schema, and your actual usage:
Dynamic Search Conditions in T-SQL by Erland Sommarskog
The Curse and Blessings of Dynamic SQL by Erland Sommarskog
The portion of the above articles that applies to this query is Umachandar's Bag of Tricks: default the parameters to the extremes of the data type's range to eliminate the need for the OR. This gives the best index usage and overall performance:
CREATE PROCEDURE dbo.GetActiveEmployee
@startTime DATETIME = NULL,
@endTime DATETIME = NULL
AS
SET NOCOUNT ON
DECLARE @startTimeCopy DATETIME
DECLARE @endTimeCopy DATETIME
SET @startTimeCopy = COALESCE(@startTime, '17530101') -- minimum DATETIME value
SET @endTimeCopy = COALESCE(@endTime, '99991231')     -- maximum DATETIME date
SELECT columns
FROM table
WHERE table.StartTime >= @startTimeCopy AND table.EndTime <= @endTimeCopy

Probably not. Take a look at this blog posting from Tony Rogerson SQL Server MVP:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/05/17/444.aspx
You should at least get the idea that you need to test with credible data and examine the execution plans.

I don't think you can guarantee that the index will be used. It will depend a lot on the size of the table, the columns you are showing, the structure of the index and other factors.
Your best bet is to use SQL Server Management Studio (SSMS) and run the query, and include the "Actual Execution Plan". Then you can study that and see exactly which index or indices were used.
You'll often be surprised by what you find.
This is especially true if there is an OR or IN in the query.
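For checking this from a script rather than the SSMS toolbar button, one option is to capture the actual plan alongside the results (a sketch; "columns" and "table" are the question's placeholders):

```sql
-- Returns the actual execution plan as an XML result set alongside the data;
-- open the XML in SSMS to see which index was seeked or scanned.
SET STATISTICS XML ON;

SELECT columns
FROM table
WHERE (@startTime IS NULL OR table.StartTime >= @startTime);

SET STATISTICS XML OFF;
```

This shows the plan for the actual parameter values used, which is exactly what matters when OR and NULL checks are involved.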

Big difference in Estimated and Actual rows when using a local variable

This is my first post on Stack Overflow, so I hope I'm correctly following all protocols!
I'm struggling with a stored procedure in which I create a table variable and fill it with an insert statement using an inner join. The insert itself is simple, but it gets complicated because the join is filtered on a local variable. Since the optimizer doesn't have statistics for this variable, my estimated row count gets skewed.
The specific piece of code that causes trouble:
declare @minorderid int
select @minorderid = MIN(lo.order_id)
from [order] lo with(nolock)
where lo.order_datetime >= @datefrom

insert into @OrderTableLog_initial
(order_id, order_log_id, order_id, order_datetime, account_id, domain_id)
select ot.order_id, lol.order_log_id, ot.order_id, ot.order_datetime, ot.account_id, ot.domain_id
from [order] ot with(nolock)
inner join order_log lol with(nolock)
on ot.order_id = lol.order_id
and ot.order_datetime >= @datefrom
where (ot.domain_id in (1,2,4) and lol.order_log_id not in (select order_log_id
                                                            from dbo.order_log_detail lld with(nolock)
                                                            where order_id >= @minorderid)
or
(ot.domain_id = 3 and ot.order_id not in (select order_id
                                          from dbo.order_log_detail_spa llds with(nolock)
                                          where order_id >= @minorderid)))
order by lol.order_id, lol.order_log_id
The @datefrom local variable is also declared earlier in the stored procedure:
declare @datefrom datetime
if datepart(hour, GETDATE()) between 4 and 9
begin
set @datefrom = '2011-01-01'
end
else
begin
set @datefrom = DATEADD(DAY, -2, GETDATE())
end
I've also tested this with a temporary table instead of a table variable, but nothing changes. However, when I replace the local variable @datefrom with a fixed datestamp, my estimates and actuals are almost the same.
ot.order_datetime >= @datefrom (SQL Sentry Plan Explorer screenshot)
ot.order_datetime >= '2017-05-03 18:00:00.000' (SQL Sentry Plan Explorer screenshot)
I've come to understand that there's a way to fix this by turning the code into a dynamic SP, but I'm not sure how to do it. I would be grateful for suggestions; maybe I need a completely different approach? Forgive me if I forgot to mention something, this is my first post.
EDIT:
MSSQL version = 11.0.5636
I've also tested with trace flag 2453, but with no success
Best regards,
Peter
Indeed, the behavior you are experiencing is because of the variables. SQL Server won't store an execution plan for each and every possible input, so for some inputs the cached plan may not be optimal.
To answer your explicit question: you'll have to build the query as a string in an NVARCHAR variable, then execute it.
Some notes before the actual code:
This can be prone to SQL injection (in general)
SQL Server will store the plans separately, meaning they will use more memory and possibly knock out other plans from the cache
Using an imaginary setup, this is what you want to do:
DECLARE @inputDate DATETIME2 = '2017-01-01 12:21:54';
DECLARE @dynamicSQL NVARCHAR(MAX) = CONCAT('SELECT col1, col2 FROM MyTable WHERE myDateColumn = ''', FORMAT(@inputDate, 'yyyy-MM-dd HH:mm:ss'), ''';');
INSERT INTO @myTableVar (col1, col2)
EXEC sp_executesql @stmt = @dynamicSQL;
As an additional note:
You can try to use EXISTS and NOT EXISTS instead of IN and NOT IN.
You can try to use a temp table (#myTempTable) instead of a table variable. Physical temp tables can perform better with large amounts of data, and you can put indexes on them. (For more info: What's the difference between a temp table and table variable in SQL Server? or the official documentation.)
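For the first suggestion, a NOT EXISTS rewrite of one of the question's NOT IN branches might look like this sketch (table and column names are taken from the question; correlating on order_log_id is an assumption about the intended match):

```sql
-- NOT EXISTS correlates per row instead of materializing the full NOT IN
-- list, and it also behaves predictably when the subquery can return NULLs.
select ot.order_id, lol.order_log_id
from [order] ot
inner join order_log lol
    on ot.order_id = lol.order_id
where ot.domain_id in (1, 2, 4)
  and not exists (select 1
                  from dbo.order_log_detail lld
                  where lld.order_log_id = lol.order_log_id
                    and lld.order_id >= @minorderid);
```

The NULL behavior is worth noting: a NOT IN list containing a single NULL matches nothing at all, which NOT EXISTS avoids.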

Stored procedure execution taking long because of function used inside

In SQL Server 2012 I have the following user defined function:
CREATE FUNCTION [dbo].[udfMaxDateTime]()
RETURNS datetime
AS
BEGIN
RETURN '99991231';
END;
This is then being used in a stored procedure like so:
DECLARE #MaxDate datetime = dbo.udfMaxDateTime();
DELETE FROM TABLE_NAME
WHERE
ValidTo = #MaxDate
AND
Id NOT IN
(
SELECT
MAX(Id)
FROM
TABLE_NAME
WHERE
ValidTo = #MaxDate
GROUP
BY
COL1
);
Now, if I run the stored procedure with the above code, it takes around 12 seconds to execute (1.2 million rows).
If I change the WHERE clauses to ValidTo = '99991231' then, the stored procedure runs in under 1 second and it runs in Parallel.
Could anyone try and explain why this is happening ?
It is not because of the user-defined function, it is because of the variable.
When you use a variable @MaxDate in the DELETE, the query optimizer doesn't know the value of this variable when generating the execution plan. So it generates a plan based on the available statistics on the ValidTo column and some built-in heuristic rules for cardinality estimation with an equality comparison.
When you use a literal constant in the query the optimizer knows its value and can generate a more efficient plan.
If you add OPTION (RECOMPILE), the execution plan is not cached and is regenerated on every execution, at which point all variable values are known to the optimizer. It is quite likely the query will run fast with this option. The option does add a certain compilation overhead, but that is noticeable only when you run the query very often.
DECLARE @MaxDate datetime = dbo.udfMaxDateTime();
DELETE FROM TABLE_NAME
WHERE
ValidTo = @MaxDate
AND
Id NOT IN
(
SELECT
MAX(Id)
FROM
TABLE_NAME
WHERE
ValidTo = @MaxDate
GROUP BY
COL1
)
OPTION(RECOMPILE);
I highly recommend reading Slow in the Application, Fast in SSMS by Erland Sommarskog.

Using while loop in T-SQL function

Non-database programmer here. As it happens, I need to create a function in T-SQL which returns the count of workdays between two given dates. I believe the easiest way to do this is with a while loop. The problem is that as soon as I write something like
while @date < @endDate
begin
end
the statement won't execute, claiming "Incorrect syntax near the keyword 'return'" (not very helpful). Where's the problem?
P.S. Full code:
ALTER FUNCTION [dbo].[GetNormalWorkdaysCount] (
@startDate DATETIME,
@endDate DATETIME
)
RETURNS INT
AS
BEGIN
declare @Count INT,
@CurrDate DATETIME
set @CurrDate = @startDate
while (@CurrDate < @endDate)
begin
end
return @Count
END
GO
Unlike some languages, the BEGIN/END pair in SQL Server cannot be empty - they must contain at least one statement.
As to your actual problem - you've said you're not a DB programmer. Most beginners to SQL tend to go down the same route - trying to write procedural code to solve the problem.
Whereas, SQL is a set-based language - it's usually better to find a set-based solution, rather than using loops.
In this instance, a calendar table would be a real help. Such a table contains one row for each date, and additional columns indicating useful information for your business (e.g. what you consider to be a working day). It then makes your query for working days look like:
SELECT COUNT(*) FROM Calendar
WHERE BaseDate >= @StartDate AND BaseDate < @EndDate AND IsWorkingDay = 1
Populating the Calendar table is a one-off exercise, and you can easily populate it with, say, 30 years' worth of dates.
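A one-off population script might look like this sketch (the Calendar schema with BaseDate and IsWorkingDay comes from the answer above; treating Saturday and Sunday as the non-working days, and the start date, are assumptions for illustration):

```sql
SET DATEFIRST 1; -- Monday = 1, so weekday numbers don't depend on session language

-- Build ~30 years of dates from a row-number tally over system tables,
-- flagging weekends as non-working days.
;WITH n AS (
    SELECT TOP (10958) -- days in roughly 30 years
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS i
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO Calendar (BaseDate, IsWorkingDay)
SELECT DATEADD(DAY, i, '20000101'),
       CASE WHEN DATEPART(WEEKDAY, DATEADD(DAY, i, '20000101')) IN (6, 7)
            THEN 0 ELSE 1 END
FROM n;
```

Public holidays can then be flipped to IsWorkingDay = 0 with a handful of targeted UPDATEs, which is exactly the kind of business rule a calendar table exists to capture.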
Using a loop within SQL Server is rarely a good idea :)
There are a few better solutions; one has already been presented on Stack Overflow.

SQL Server 2008 Stored proc - Optimizer thinks my parameter is nullable

The optimizer seems to be getting confused about the nullability of a varchar parameter, and I'm not sure I understand why. I'm using SQL Server 2008, btw. All columns being queried are indexed: the TDate column is a clustered, partitioned index, and FooValue is an indexed, non-nullable column.
Example:
CREATE PROCEDURE dbo.MyExample_sp @SDate DATETIME, @EDate DATETIME, @FooValue VARCHAR(50)
AS
SET NOCOUNT ON
--To avoid parameter spoofing / sniffing
DECLARE @sDate1 DATETIME, @eDate1 DATETIME
SET @sDate1 = @SDate
SET @eDate1 = @EDate
SELECT
fd.Col1,
fd.Col2,
fd.TDate,
fl.FooValue,
fd.AccountNum
FROM dbo.FooData fd
INNER JOIN dbo.FooLookup fl
ON fl.FL_ID = fd.FL_ID
WHERE fd.TDate >= @sDate1
AND fd.TDate < @eDate1
AND fl.FooValue = @FooValue
Running this as a plain query works as expected: all index seeks, no sniffing problems, etc. Running it by executing the sproc takes 20 times longer, with the same query and the same parameters. However, if I make the following change (very last line), everything works again.
CREATE PROCEDURE dbo.MyExample_sp @SDate DATETIME, @EDate DATETIME, @FooValue VARCHAR(50)
AS
SET NOCOUNT ON
--To avoid parameter spoofing / sniffing
DECLARE @sDate1 DATETIME, @eDate1 DATETIME
SET @sDate1 = @SDate
SET @eDate1 = @EDate
SELECT
fd.Col1,
fd.Col2,
fd.TDate,
fl.FooValue,
fd.AccountNum
FROM dbo.FooData fd
INNER JOIN dbo.FooLookup fl
ON fl.FL_ID = fd.FL_ID
WHERE fd.TDate >= @sDate1
AND fd.TDate < @eDate1
AND fl.FooValue = ISNULL(@FooValue, 'testthis')
It's like the optimizer is getting confused about whether the parameter is nullable or not. Also, adding a default value to the parameter doesn't make any difference; the sproc still takes forever to run unless I use = ISNULL(@parameter, 'some constant').
I'm happy I figured this out. But, I'd like to understand why this is happening and if there was a more elegant way to resolve the issue.
Re: Nullable variables
There is no concept of a nullable variable in T-SQL in the way that you can define a variable as nullable in C# using ?.
If you have a parameter in a stored procedure, the end user can pass whatever he or she wants into the stored procedure, be it a real value or a null.
Re: the query plan
The query plan that gets cached is the one generated the first time you call the stored procedure. So if you passed in a NULL for @FooValue the very first time you ran it, the plan will be optimized for @FooValue = NULL.
There is an OPTIMIZE FOR hint that you can use to optimize the query for some other value.
Or you can use WITH RECOMPILE, which will force the query plan to be regenerated on every run of the stored procedure.
Obviously there are trade-offs when using these types of hints, so make sure you understand them before using them.
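As a sketch of the two hints applied to the query from this question (the literal 'testthis' is just an illustrative value, as in the original post):

```sql
-- Option 1: compile the plan for a representative value rather than
-- whatever value happened to be sniffed on the first call.
SELECT fd.Col1, fd.Col2, fd.TDate, fl.FooValue, fd.AccountNum
FROM dbo.FooData fd
INNER JOIN dbo.FooLookup fl ON fl.FL_ID = fd.FL_ID
WHERE fd.TDate >= @sDate1
  AND fd.TDate < @eDate1
  AND fl.FooValue = @FooValue
OPTION (OPTIMIZE FOR (@FooValue = 'testthis'));

-- Option 2: recompile on every execution, so the plan always reflects the
-- actual parameter values (added to the procedure header):
-- CREATE PROCEDURE dbo.MyExample_sp ... WITH RECOMPILE AS ...
```

OPTIMIZE FOR pays a one-time guess that must suit most workloads; WITH RECOMPILE pays a compilation on every call but never suffers a stale plan.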

Does SQL Server optimize DATEADD calculation in select query?

I have a query like this on Sql Server 2008:
DECLARE @START_DATE DATETIME
SET @START_DATE = GETDATE()
SELECT * FROM MY_TABLE
WHERE TRANSACTION_DATE_TIME > DATEADD(MINUTE, -1440, @START_DATE)
In the select query above, does SQL Server optimize the query so that the DATEADD result is not calculated again and again? Or is it my responsibility to store the DATEADD result in a temp variable?
SQL Server functions that are considered runtime constants are evaluated only once. GETDATE() is such a function, and DATEADD(..., constant, GETDATE()) is also a runtime constant. By leaving the actual function call inside the query you let the optimizer see what value will actually be used (as opposed to a variable value sniff) and then it can adjust its cardinality estimations accordingly, possibly coming up with a better plan.
Also read this: Troubleshooting Poor Query Performance: Constant Folding and Expression Evaluation During Cardinality Estimation.
@Martin Smith:
You can run this query:
set nocount on;
declare @known int;
select @known = count(*) from sysobjects;
declare @cnt int = @known;
while @cnt = @known
select @cnt = count(*) from sysobjects where getdate()=getdate()
select @cnt, @known;
In my case, after 22 seconds it hit the boundary case and the loop exited. The important thing is that the loop exited with @cnt zero. One would expect that if getdate() were evaluated per row, we would get a @cnt different from the correct @known count, but not 0. The fact that @cnt is zero when the loop exits shows that each getdate() was evaluated once, and then the same constant value was used in the WHERE filtering of every row (matching none). I am aware that one positive example does not prove a theorem, but I think the case is conclusive enough.
Surprisingly, I've found that using GETDATE() inline seems to be more efficient than performing this type of calculation beforehand.
DECLARE @sd1 DATETIME, @sd2 DATETIME;
SET @sd1 = GETDATE();

SELECT * FROM dbo.table
WHERE datetime_column > DATEADD(MINUTE, -1440, @sd1)

SELECT * FROM dbo.table
WHERE datetime_column > DATEADD(MINUTE, -1440, GETDATE())

SET @sd2 = DATEADD(MINUTE, -1440, @sd1);
SELECT * FROM dbo.table
WHERE datetime_column > @sd2;
If you check the plans on those, the middle query will always come out with the lowest cost (but not always the lowest elapsed time). Of course, it may depend on your indexes and data, and you should not assume, based on one query, that the same pre-emptive optimization will work on another. My instinct would be to not perform any calculations inline and instead use the @sd2 variation above; but I've learned that I can't trust my instinct all the time, and I can't make general assumptions based on behavior I experience in particular scenarios.
It will be evaluated just once. You can double-check this in the execution plan ("Compute Scalar" -> Estimated Number of Executions = 1).