SQL Server 2008 Stored proc - Optimizer thinks my parameter is nullable - sql

The optimizer seems to be getting confused about the nullability of a varchar parameter and I'm not sure I understand why. I'm using SQL Server 2008, btw. All columns being queried are indexed. The TDate column is a clustered, partitioned index. The FooValue column is an indexed, non-nullable column.
Example:
CREATE PROCEDURE dbo.MyExample_sp @SDate DATETIME, @EDate DATETIME, @FooValue VARCHAR(50)
AS
SET NOCOUNT ON
--To avoid parameter spoofing / sniffing
DECLARE @sDate1 DATETIME, @eDate1 DATETIME
SET @sDate1 = @SDate
SET @eDate1 = @EDate
SELECT
fd.Col1,
fd.Col2,
fd.TDate,
fl.FooValue,
fd.AccountNum
FROM dbo.FooData fd
INNER JOIN dbo.FooLookup fl
ON fl.FL_ID = fd.FL_ID
WHERE fd.TDate >= @sDate1
AND fd.TDate < @eDate1
AND fl.FooValue = @FooValue
Running this as a plain query works as expected: all index accesses are seeks, no sniffing problems, etc. Running the same query with the same parameters by executing the sproc takes 20 times longer. However, if I make the following change (very last line), everything works again.
CREATE PROCEDURE dbo.MyExample_sp @SDate DATETIME, @EDate DATETIME, @FooValue VARCHAR(50)
AS
SET NOCOUNT ON
--To avoid parameter spoofing / sniffing
DECLARE @sDate1 DATETIME, @eDate1 DATETIME
SET @sDate1 = @SDate
SET @eDate1 = @EDate
SELECT
fd.Col1,
fd.Col2,
fd.TDate,
fl.FooValue,
fd.AccountNum
FROM dbo.FooData fd
INNER JOIN dbo.FooLookup fl
ON fl.FL_ID = fd.FL_ID
WHERE fd.TDate >= @sDate1
AND fd.TDate < @eDate1
AND fl.FooValue = ISNULL(@FooValue, 'testthis')
It's like the optimizer is getting confused about whether the parameter is nullable or not. Also, adding a default value to the parameter doesn't make any difference. The sproc still takes forever to run unless I use = ISNULL(@parameter, 'some constant').
I'm happy I figured this out. But, I'd like to understand why this is happening and if there was a more elegant way to resolve the issue.

Re: Nullable variables
There is no concept of a nullable variable in T-SQL the way you can define a nullable value type in C# using the ? suffix.
If you have a parameter in a stored procedure, the end user can pass whatever he or she wants into the stored procedure, be it a real value or a null.
Re: the query plan
The query plan that gets cached is the plan generated the first time you call the stored procedure. So if you passed in a NULL for @FooValue the very first time you ran it, the plan will be optimized for @FooValue = NULL.
There is an OPTIMIZE FOR hint that you can use to optimize the query for some other value.
Or you can use WITH RECOMPILE, which will force the query plan to get regenerated on every run of the stored procedure.
Obviously there are trade-offs when using these types of hints, so make sure you understand them before using them.
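For reference, here is roughly what those two hints look like, sketched against the example procedure above (the value 'testthis' is just a placeholder; pick a value representative of your real workload):

```sql
-- OPTIMIZE FOR: pin the plan to a chosen value instead of whatever was sniffed first
SELECT fd.Col1, fd.Col2, fd.TDate, fl.FooValue, fd.AccountNum
FROM dbo.FooData fd
INNER JOIN dbo.FooLookup fl ON fl.FL_ID = fd.FL_ID
WHERE fd.TDate >= @sDate1
  AND fd.TDate < @eDate1
  AND fl.FooValue = @FooValue
OPTION (OPTIMIZE FOR (@FooValue = 'testthis'));

-- WITH RECOMPILE: regenerate the plan on every execution of the procedure
-- CREATE PROCEDURE dbo.MyExample_sp ... WITH RECOMPILE AS ...
```

OPTIMIZE FOR trades plan stability for the risk that the chosen value is unrepresentative; WITH RECOMPILE trades compilation cost on every call for a plan tailored to each parameter set.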

Related

Big difference in Estimated and Actual rows when using a local variable

This is my first post on Stackoverflow so I hope I'm correctly following all protocols!
I'm struggling with a stored procedure in which I create a table variable and fill it with an insert statement using an inner join. The insert itself is simple, but it gets complicated because the join is filtered on a local variable. Since the optimizer doesn't have statistics for this variable, my estimated row count is getting skewed.
The specific piece of code that causes trouble:
declare @minorderid int
select @minorderid = MIN(lo.order_id)
from [order] lo with(nolock)
where lo.order_datetime >= @datefrom
insert into #OrderTableLog_initial
(order_id, order_log_id, order_id, order_datetime, account_id, domain_id)
select ot.order_id, lol.order_log_id, ot.order_id, ot.order_datetime, ot.account_id, ot.domain_id
from [order] ot with(nolock)
inner join order_log lol with(nolock)
on ot.order_id = lol.order_id
and ot.order_datetime >= @datefrom
where (ot.domain_id in (1,2,4) and lol.order_log_id not in ( select order_log_id
from dbo.order_log_detail lld with(nolock)
where order_id >= @minorderid
)
or
(ot.domain_id = 3 and ot.order_id not in (select order_id
from dbo.order_log_detail_spa llds with(nolock)
where order_id >= @minorderid
)
))
order by lol.order_id, lol.order_log_id
The @datefrom local variable is also declared earlier in the stored procedure:
declare @datefrom datetime
if datepart(hour,GETDATE()) between 4 and 9
begin
set @datefrom = '2011-01-01'
end
else
begin
set @datefrom = DATEADD(DAY,-2,GETDATE())
end
I've also tested this with a temporary table instead of a table variable, but nothing changes. However, when I replace the local variable >= @datefrom with a fixed date literal, my estimates and actuals are almost the same.
ot.order_datetime >= @datefrom (SQL Sentry Plan Explorer screenshot)
ot.order_datetime >= '2017-05-03 18:00:00.000' (SQL Sentry Plan Explorer screenshot)
I've come to understand that there's a way to fix this by turning the code into dynamic SQL, but I'm not sure how to do this. I would be grateful for any suggestions. Or maybe I need to take a completely different approach? Forgive me if I forgot to mention something; this is my first post.
EDIT:
MSSQL version = 11.0.5636
I've also tested with trace flag 2453, but with no success.
Best regards,
Peter
Indeed, the behavior you are experiencing is because of the variables. SQL Server won't store an execution plan for each and every possible input, so for some inputs the cached execution plan may or may not be optimal.
To answer your explicit question: you'll have to build the query as a string in an nvarchar variable, then execute it.
Some notes before the actual code:
This can be prone to SQL injection (in general)
SQL Server will store the plans separately, meaning they will use more memory and possibly knock out other plans from the cache
Using an imaginary setup, this is what you want to do:
DECLARE @inputDate DATETIME2 = '2017-01-01 12:21:54';
DECLARE @dynamicSQL NVARCHAR(MAX) = CONCAT('SELECT col1, col2 FROM MyTable WHERE myDateColumn = ''', FORMAT(@inputDate, 'yyyy-MM-dd HH:mm:ss'), ''';');
INSERT INTO @myTableVar (col1, col2)
EXEC sp_executesql @stmt = @dynamicSQL;
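As a side note, the string-concatenation approach is exactly what makes dynamic SQL injection-prone. sp_executesql also accepts typed parameters, which keeps the value out of the statement text entirely. A sketch, reusing the hypothetical MyTable/myDateColumn names from above:

```sql
DECLARE @inputDate DATETIME2 = '2017-01-01 12:21:54';
DECLARE @dynamicSQL NVARCHAR(MAX) =
    N'SELECT col1, col2 FROM MyTable WHERE myDateColumn = @p_date;';

-- The date travels as a typed parameter, not as text inside the statement,
-- so there is nothing to inject into and no format/locale conversion issues.
EXEC sp_executesql
    @stmt   = @dynamicSQL,
    @params = N'@p_date DATETIME2',
    @p_date = @inputDate;
```

Parameterized statements also get a single reusable plan per statement shape, instead of one plan per distinct concatenated literal.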
As an additional note:
you can try to use EXISTS and NOT EXISTS instead of IN and NOT IN.
You can try to use a temp table (#myTempTable) instead of a table variable and put some indexes on it. Physical temp tables can perform better with large amounts of data, and you can index them. (For more info you can go here: What's the difference between a temp table and table variable in SQL Server? or to the official documentation.)
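The temp-table route looks roughly like this (a sketch with hypothetical column names and types; adjust to your schema):

```sql
CREATE TABLE #myTempTable (
    col1 INT NOT NULL,
    col2 DATETIME2 NOT NULL
);

-- Unlike a table variable, a temp table gets real statistics,
-- and you can create indexes on it explicitly:
CREATE CLUSTERED INDEX IX_myTempTable_col2 ON #myTempTable (col2);
```

With statistics available, the optimizer's row estimates for joins against the temp table should be far closer to the actuals than with a table variable.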

SELECT hangs when using a variable

SQL Server 2014 (v13.0.4001.0) - this sample script hangs:
DECLARE @from int = 0
DECLARE @to int = 1000
select
*
from
TaskNote dtn
join
Participants tp on dtn.Task_ID = tp.TaskId
where
dtn.TaskNote_ID between @from and @to
But if I change variables to constants - it is all OK.
Like this:
where
dtn.DocTaskNote_ID between 0 and 1000
Also, if I remove the join, all is ok.
I can't figure out where the problem is.
A possible cause for the problem you mention, in case your query lies within a stored procedure, is parameter sniffing. SQL Server compiles the query for the first time using the initial values of the parameters. In subsequent calls to the procedure the engine uses the cached execution plan which is probably not optimal for the current variable values.
One workaround for this problem is to use OPTION (RECOMPILE):
select *
from TaskNote dtn
join Participants tp on dtn.Task_ID = tp.TaskId
where dtn.TaskNote_ID between @from and @to
option (recompile)
This way the query is compiled every time the procedure is executed, using the current parameter values.
Further reading:
Parameter Sniffing Problem and Possible Workarounds

Stored procedure execution taking long because of function used inside

In SQL Server 2012 I have the following user defined function:
CREATE FUNCTION [dbo].[udfMaxDateTime]()
RETURNS datetime
AS
BEGIN
RETURN '99991231';
END;
This is then being used in a stored procedure like so:
DECLARE @MaxDate datetime = dbo.udfMaxDateTime();
DELETE FROM TABLE_NAME
WHERE
ValidTo = @MaxDate
AND
Id NOT IN
(
SELECT
MAX(Id)
FROM
TABLE_NAME
WHERE
ValidTo = @MaxDate
GROUP BY
COL1
);
Now, if I run the stored procedure with the above code, it takes around 12 seconds to execute (1.2 million rows).
If I change the WHERE clauses to ValidTo = '99991231', the stored procedure runs in under 1 second, and it runs in parallel.
Could anyone try and explain why this is happening ?
It is not because of the user-defined function; it is because of the variable.
When you use a variable @MaxDate in the DELETE query, the optimizer doesn't know the value of this variable when generating the execution plan. So it generates a plan based on the available statistics on the ValidTo column and some built-in heuristic rules for cardinality estimates when you have an equality comparison in a query.
When you use a literal constant in the query the optimizer knows its value and can generate a more efficient plan.
If you add OPTION(RECOMPILE), the execution plan is not cached; it is regenerated on every execution, with all parameter values known to the optimizer. It is quite likely that the query will run fast with this option. The option does add a certain compilation overhead, but it is noticeable only when you run the query very often.
DECLARE @MaxDate datetime = dbo.udfMaxDateTime();
DELETE FROM TABLE_NAME
WHERE
ValidTo = @MaxDate
AND
Id NOT IN
(
SELECT
MAX(Id)
FROM
TABLE_NAME
WHERE
ValidTo = @MaxDate
GROUP BY
COL1
)
OPTION(RECOMPILE);
I highly recommend reading Slow in the Application, Fast in SSMS by Erland Sommarskog.

SQL Variables Change in SSIS

I'm new to SSIS so forgive me if this question is trivial or has already been answered. So I have a SQL query that begins as follows:
declare @start datetime, @end datetime, @startMonth datetime, @endMonth datetime, @maxHoursToRespond int
----Set These------------------------------------------------
set @end='6/27/2014'
set @maxHoursToRespond=24
-------------------------------------------------------------
set @start=dateadd(dd, -90, @end) -- set duplication period
set @startMonth=dateadd(dd, -2, @end) -- set to start date of output you want
set @endMonth=dateadd(dd, -1, @end) -- set to day of end date of output you want
When I put this in my OLE DB Source Editor with SQL command as my data access mode,
all the variables are replaced with question marks.
It looks like :
DECLARE ? datetime, ? datetime, ? datetime, ? datetime, ? int
/*--Set These------------------------------------------------*/ SET ? = ?
SET ? = 24
/*-----------------------------------------------------------*/ SET ? = dateadd(dd, - 90, ?)
SET ? = dateadd(dd, - 2, ?)
SET ? = dateadd(dd, - 1, ?)
in the query builder. I'd like to know why this is happening. I'd also like to know how I can get the query to build successfully
(currently I get a syntax error of "The Declare SQL construct or statement is not supported.").
Do I have to create these variables (like @start) in SSIS itself?
You can:
Encapsulate your SQL in a stored procedure that contains all of the declarations internally (assuming the variable values are static), then call the stored procedure in the execute SQL task with EXEC. EXEC My_Stored_Procedure
Write a stored procedure that accepts the variables as inputs, map them to variables in SSIS, then execute the stored procedure like this: EXEC My_Stored_Proc ?,?,?,?, with the user variables mapped to the corresponding stored procedure parameters.
Leave the query as is, but remove the DECLAREs and the SETs, and map the SSIS variables to the query. This most likely will not work, because SSIS will not know which ? corresponds to which variable (it will try to map them in the order they appear).
Number 2 is the generally accepted method, unless you store your variable values in a table or somewhere else that the SP in number 1 can access, in which case number 1 may be cleaner.
I pasted your script straight into the SQL command text in the OLE DB Source Editor, and nothing changed in it. By pressing Parse Query... I checked that the SQL is correct. When I tried to use Build Query..., it said that DECLARE is not supported. So don't use the builder :)

Will index be used when using OR clause in where

I wrote a stored procedure with optional parameters.
CREATE PROCEDURE dbo.GetActiveEmployee
@startTime DATETIME=NULL,
@endTime DATETIME=NULL
AS
SET NOCOUNT ON
SELECT columns
FROM table
WHERE (@startTime is NULL or table.StartTime >= @startTime) AND
(@endTime is NULL or table.EndTime <= @endTime)
I'm wondering whether indexes on StartTime and EndTime will be used?
Yes, they will be used (well, probably; check the execution plan. But I do know that the optionality of your parameters shouldn't make any difference.)
If you are having performance problems with your query then it might be a result of parameter sniffing. Try the following variation of your stored procedure and see if it makes any difference:
CREATE PROCEDURE dbo.GetActiveEmployee
@startTime DATETIME=NULL,
@endTime DATETIME=NULL
AS
SET NOCOUNT ON
DECLARE @startTimeCopy DATETIME
DECLARE @endTimeCopy DATETIME
set @startTimeCopy = @startTime
set @endTimeCopy = @endTime
SELECT columns
FROM table
WHERE (@startTimeCopy is NULL or table.StartTime >= @startTimeCopy) AND
(@endTimeCopy is NULL or table.EndTime <= @endTimeCopy)
This disables parameter sniffing (SQL Server using the actual values passed to the SP to optimize it). In the past I've fixed some weird performance issues this way, though I still can't satisfactorily explain why.
Another thing that you might want to try is splitting your query into several different statements depending on the NULL-ness of your parameters:
IF @startTime IS NULL
BEGIN
IF @endTime IS NULL
SELECT columns FROM table
ELSE
SELECT columns FROM table WHERE table.EndTime <= @endTime
END
ELSE
BEGIN
IF @endTime IS NULL
SELECT columns FROM table WHERE table.StartTime >= @startTime
ELSE
SELECT columns FROM table WHERE table.StartTime >= @startTime AND table.EndTime <= @endTime
END
This is messy, but it might be worth a try if you are having problems. The reason it helps is that SQL Server can only have a single execution plan per SQL statement, yet your statement can potentially return vastly different result sets.
For example, if you pass in NULL and NULL you will return the entire table and the most optimal execution plan, however if you pass in a small range of dates it is more likely that a row lookup will be the most optimal execution plan.
With this query as a single statement SQL server is forced to choose between these two options, and so the query plan is likely to be sub-optimal in certain situations. By splitting the query into several statements however SQL server can have a different execution plan in each case.
(You could also use EXEC / dynamic SQL to achieve the same thing if you prefer.)
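That dynamic SQL variant could look roughly like this (a sketch: it appends only the conditions for parameters that were actually supplied, so each combination gets its own cached plan):

```sql
DECLARE @sql NVARCHAR(MAX) = N'SELECT columns FROM table WHERE 1 = 1';

-- Build the WHERE clause from only the parameters that are set
IF @startTime IS NOT NULL
    SET @sql += N' AND table.StartTime >= @startTime';
IF @endTime IS NOT NULL
    SET @sql += N' AND table.EndTime <= @endTime';

-- Pass both parameters; sp_executesql ignores any the statement doesn't reference
EXEC sp_executesql @sql,
    N'@startTime DATETIME, @endTime DATETIME',
    @startTime, @endTime;
```

Because the values stay as typed parameters rather than concatenated literals, this avoids the injection and plan-cache-bloat problems of building the whole statement as a string.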
There is a great article on dynamic search criteria in SQL. The method I personally use from the article is the X = @X OR @X IS NULL style with OPTION (RECOMPILE) added at the end. If you read the article, it will explain why:
http://www.sommarskog.se/dyn-search-2008.html
Yes, based on the query provided, indexes on or including the StartTime and EndTime columns can be used.
However, the [variable] IS NULL OR ... pattern makes the query non-sargable. If you don't want to use an IF statement (CASE is an expression and cannot be used for control-of-flow logic), dynamic SQL is the next alternative for performant SQL.
IF @startTime IS NOT NULL AND @endTime IS NOT NULL
BEGIN
SELECT columns
FROM TABLE
WHERE starttime >= @startTime
AND endtime <= @endTime
END
ELSE IF @startTime IS NOT NULL
BEGIN
SELECT columns
FROM TABLE
WHERE starttime >= @startTime
END
ELSE IF @endTime IS NOT NULL
BEGIN
SELECT columns
FROM TABLE
WHERE endtime <= @endTime
END
ELSE
BEGIN
SELECT columns
FROM TABLE
END
Dynamically changing the search based on the given parameters is a complicated subject, and doing it one way over another, even with only a very slight difference, can have massive performance implications. The key is to use an index: ignore how compact the code is, don't worry about repeating code; you must get a good query execution plan (use an index).
Read this and consider all the methods. Your best method will depend on your parameters, your data, your schema, and your actual usage:
Dynamic Search Conditions in T-SQL by Erland Sommarskog
The Curse and Blessings of Dynamic SQL by Erland Sommarskog
The portion of the above articles that applies to this query is Umachandar's Bag of Tricks: it basically defaults the parameters to sentinel values to eliminate the need for the OR. This gives the best index usage and overall performance:
CREATE PROCEDURE dbo.GetActiveEmployee
@startTime DATETIME=NULL,
@endTime DATETIME=NULL
AS
SET NOCOUNT ON
DECLARE @startTimeCopy DATETIME
DECLARE @endTimeCopy DATETIME
set @startTimeCopy = COALESCE(@startTime,'01/01/1753')
set @endTimeCopy = COALESCE(@endTime,'12/31/9999')
SELECT columns
FROM table
WHERE table.StartTime >= @startTimeCopy AND table.EndTime <= @endTimeCopy
Probably not. Take a look at this blog post from Tony Rogerson, SQL Server MVP:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/05/17/444.aspx
You should at least get the idea that you need to test with credible data and examine the execution plans.
I don't think you can guarantee that the index will be used. It will depend a lot on the size of the table, the columns you are showing, the structure of the index and other factors.
Your best bet is to use SQL Server Management Studio (SSMS) and run the query, and include the "Actual Execution Plan". Then you can study that and see exactly which index or indices were used.
You'll often be surprised by what you find.
This is especially true if there is an OR or IN in the query.