Does SQL Server optimize DATEADD calculation in select query?

I have a query like this on SQL Server 2008:
DECLARE @START_DATE DATETIME
SET @START_DATE = GETDATE()

SELECT * FROM MY_TABLE
WHERE TRANSACTION_DATE_TIME > DATEADD(MINUTE, -1440, @START_DATE)
In the SELECT query above, does SQL Server optimize the query so that the DATEADD result is not calculated again and again? Or is it my responsibility to store the DATEADD result in a variable?

SQL Server functions that are considered runtime constants are evaluated only once. GETDATE() is such a function, and DATEADD(..., constant, GETDATE()) is also a runtime constant. By leaving the actual function call inside the query you let the optimizer see what value will actually be used (as opposed to a variable value sniff) and then it can adjust its cardinality estimations accordingly, possibly coming up with a better plan.
Also read this: Troubleshooting Poor Query Performance: Constant Folding and Expression Evaluation During Cardinality Estimation.
@Martin Smith
You can run this query:
set nocount on;
declare @known int;
select @known = count(*) from sysobjects;
declare @cnt int = @known;
while @cnt = @known
    select @cnt = count(*) from sysobjects where getdate() = getdate();
select @cnt, @known;
In my case, after 22 seconds it hit the boundary case and the loop exited. The important thing is that the loop exited with @cnt zero. One would expect that if getdate() were evaluated per row, we would get a @cnt different from the correct @known count, but not 0. The fact that @cnt is zero when the loop exits shows that each getdate() was evaluated once, and then the same constant value was used for the WHERE filtering of every row (matching none). I am aware that one positive example does not prove a theorem, but I think the case is conclusive enough.

Surprisingly, I've found that using GETDATE() inline seems to be more efficient than performing this type of calculation beforehand.
DECLARE @sd1 DATETIME, @sd2 DATETIME;
SET @sd1 = GETDATE();

SELECT * FROM dbo.table
WHERE datetime_column > DATEADD(MINUTE, -1440, @sd1);

SELECT * FROM dbo.table
WHERE datetime_column > DATEADD(MINUTE, -1440, GETDATE());

SET @sd2 = DATEADD(MINUTE, -1440, @sd1);

SELECT * FROM dbo.table
WHERE datetime_column > @sd2;
If you check the plans on those, the middle query will always come out with the lowest cost (but not always the lowest elapsed time). Of course it may depend on your indexes and data, and you should not make any assumptions based on one query that the same pre-emptive optimization will work on another query. My instinct would be to not perform any calculations inline, and instead use the @sd2 variation above... but I've learned that I can't trust my instinct all the time and I can't make general assumptions based on behavior I experience in particular scenarios.

It will be executed just once. You can double-check this in the execution plan: the Compute Scalar operator shows an Estimated Number of Executions of 1.

Related

sql execution latency when assign to a variable

The following query runs in about 22 seconds:
DECLARE @i INT, @x INT
SET @i = 156567

SELECT TOP 1
    @x = AncestorId
FROM
    dbo.tvw_AllProjectStructureParents_ChildView a
WHERE
    ProjectStructureId = @i AND
    a.NodeTypeCode = 42 AND
    a.AncestorTypeDiffLevel = 1
OPTION (RECOMPILE)
The problem is with the variable assignment (this line: @x = AncestorId). When the assignment is removed, the query speeds up to about 15 milliseconds!
I worked around it by inserting the result into a temp table, but I think that is a bad way.
Can anyone explain what the source of the problem is?
P.S.
Bad execution plan (22 s): https://www.brentozar.com/pastetheplan/?id=Sy6a4c9bW
Good execution plan (20 ms): https://www.brentozar.com/pastetheplan/?id=Byg8Hc5ZZ
When you use OPTION (RECOMPILE) SQL Server can generally perform parameter embedding optimisation.
The plan it is compiling is single use so it can sniff the values of all variables and parameters and treat them as constants.
A trivial example showing the parameter embedding optimisation in action and the effect of assigning to a variable is below (actual execution plans not estimated).
DECLARE @A INT = 1,
        @B INT = 2,
        @C INT;

SELECT TOP (1) number FROM master..spt_values WHERE @A > @B;
SELECT TOP (1) number FROM master..spt_values WHERE @A > @B OPTION (RECOMPILE);
SELECT TOP (1) @C = number FROM master..spt_values WHERE @A > @B OPTION (RECOMPILE);
The plans for this are below
Note the middle one does not even touch the table at all, as SQL Server can deduce at compile time that @A > @B is not true. But plan 3 is back to including the table in the plan, as the variable assignment evidently prevents the effect of OPTION (RECOMPILE) shown in plan 2.
(As an aside, the third plan is not really 4-5 times as expensive as the first. Assigning to a variable also seems to suppress the usual row goal logic, where the costs of the index scan would be scaled down to reflect the TOP 1.)
In your good plan the @i value of 156567 is pushed right into the seek in the anchor leg of the recursive CTE; it returned 0 rows, and so the recursive part had to do no work.
In your bad plan the recursive CTE gets fully materialised, with 627,393 executions of the recursive sub tree, and finally the predicate is applied to the resulting 627,393 rows (discarding all of them) at the end.
I'm not sure why SQL Server can't push the predicate with a variable down. You haven't supplied the definitions of your tables - or the view with the recursive CTE. There is a similar issue with predicate pushing, views, and window functions though.
One solution would be to change the view to an inline table valued function that accepts a parameter for mainid, and then add that to the WHERE clause in the anchor part of the definition, rather than relying on SQL Server to push the predicate down for you.
The difference probably comes from SELECT TOP 1.
When you select only the field, SQL Server will take only the first row. With a variable assignment, SQL Server fetches all results but uses only the top one.
I checked this on different queries and it is not always the case, but here SQL Server's optimization probably fails because of the complexity of the views/tables.
You can try following workaround:
DECLARE @i INT, @x INT
SET @i = 156567

SET @x = (SELECT TOP 1
              AncestorId
          FROM
              dbo.tvw_AllProjectStructureParents_ChildView a
          WHERE
              ProjectStructureId = @i AND
              a.NodeTypeCode = 42 AND
              a.AncestorTypeDiffLevel = 1)

Big difference in Estimated and Actual rows when using a local variable

This is my first post on Stackoverflow so I hope I'm correctly following all protocols!
I'm struggling with a stored procedure in which I create a table variable and fill it with an insert statement using an inner join. The insert itself is simple, but it gets complicated because the inner join is filtered on a local variable. Since the optimizer doesn't have statistics for this variable, my estimated row count is getting skewed.
The specific piece of code that causes trouble:
declare @minorderid int

select @minorderid = MIN(lo.order_id)
from [order] lo with(nolock)
where lo.order_datetime >= @datefrom

insert into @OrderTableLog_initial
    (order_id, order_log_id, order_id, order_datetime, account_id, domain_id)
select ot.order_id, lol.order_log_id, ot.order_id, ot.order_datetime, ot.account_id, ot.domain_id
from [order] ot with(nolock)
inner join order_log lol with(nolock)
    on ot.order_id = lol.order_id
    and ot.order_datetime >= @datefrom
where (ot.domain_id in (1,2,4) and lol.order_log_id not in (select order_log_id
                                                            from dbo.order_log_detail lld with(nolock)
                                                            where order_id >= @minorderid)
    or
    (ot.domain_id = 3 and ot.order_id not in (select order_id
                                              from dbo.order_log_detail_spa llds with(nolock)
                                              where order_id >= @minorderid)
    ))
order by lol.order_id, lol.order_log_id
The @datefrom local variable is also declared earlier in the stored procedure:
declare @datefrom datetime

if datepart(hour, GETDATE()) between 4 and 9
begin
    set @datefrom = '2011-01-01'
end
else
begin
    set @datefrom = DATEADD(DAY, -2, GETDATE())
end
I've also tested this with a temporary table instead of a table variable, but nothing changes. However, when I replace the local variable >= @datefrom with a fixed datestamp, my estimates and actuals are almost the same.
ot.order_datetime >= @datefrom = SQL Sentry Plan Explorer
ot.order_datetime >= '2017-05-03 18:00:00.000' = SQL Sentry Plan Explorer
I've come to understand that there's a way to fix this by turning this code into a dynamic stored procedure, but I'm not sure how to do this. I would be grateful for suggestions on how to do this. Maybe I have to use a completely different approach? Forgive me if I forgot to mention something; this is my first post.
EDIT:
MSSQL version = 11.0.5636
I've also tested with trace flag 2453, but with no success
Best regards,
Peter
Indeed, the behavior you are experiencing is because of the variables. SQL Server won't store an execution plan for each and every possible input, so for some inputs the cached execution plan may or may not be optimal.
To answer your explicit question: you'll have to build the query as a string in an nvarchar variable, then execute it.
Some notes before the actual code:
This can be prone to SQL injection (in general)
SQL Server will store the plans separately, meaning they will use more memory and possibly knock out other plans from the cache
Using an imaginary setup, this is what you want to do:
DECLARE @inputDate DATETIME2 = '2017-01-01 12:21:54';
DECLARE @dynamicSQL NVARCHAR(MAX) = CONCAT('SELECT col1, col2 FROM MyTable WHERE myDateColumn = ''', FORMAT(@inputDate, 'yyyy-MM-dd HH:mm:ss'), ''';');

INSERT INTO @myTableVar (col1, col2)
EXEC sp_executesql @stmt = @dynamicSQL;
As an additional note:
You can try to use EXISTS and NOT EXISTS instead of IN and NOT IN.
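As an illustrative sketch of that rewrite (table and column names are taken from the query in the question), the NOT IN subquery on order_log_id could become a correlated NOT EXISTS, which also behaves more predictably if the subquery column is nullable:

```sql
-- NOT IN form from the question:
--   lol.order_log_id NOT IN (SELECT order_log_id
--                            FROM dbo.order_log_detail
--                            WHERE order_id >= @minorderid)
-- Equivalent NOT EXISTS form:
SELECT ot.order_id, lol.order_log_id
FROM [order] ot
INNER JOIN order_log lol
    ON ot.order_id = lol.order_id
WHERE ot.domain_id IN (1, 2, 4)
  AND NOT EXISTS (SELECT 1
                  FROM dbo.order_log_detail lld
                  WHERE lld.order_id >= @minorderid
                    AND lld.order_log_id = lol.order_log_id);
```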
You can try to use a temp table (#myTempTable) instead of a table variable and put some indexes on it; physical temp tables can perform better with large amounts of data. (For more info you can go here: What's the difference between a temp table and table variable in SQL Server? or to the official documentation)
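A sketch of the temp-table variant, using the column names from the question (the column types and the index choice are assumptions for illustration):

```sql
-- Physical temp table instead of @OrderTableLog_initial; temp tables get statistics
CREATE TABLE #OrderTableLog_initial
(
    order_id       INT,
    order_log_id   INT,
    order_datetime DATETIME,
    account_id     INT,
    domain_id      INT
);

-- Index the columns the later steps filter or join on
CREATE INDEX IX_OrderTableLog_order_id
    ON #OrderTableLog_initial (order_id, order_log_id);

-- INSERT INTO #OrderTableLog_initial (...) SELECT ...  (same SELECT as in the question)
```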

Stored procedure execution taking long because of function used inside

In SQL Server 2012 I have the following user defined function:
CREATE FUNCTION [dbo].[udfMaxDateTime]()
RETURNS datetime
AS
BEGIN
RETURN '99991231';
END;
This is then being used in a stored procedure like so:
DECLARE @MaxDate datetime = dbo.udfMaxDateTime();

DELETE FROM TABLE_NAME
WHERE ValidTo = @MaxDate
  AND Id NOT IN
      (
          SELECT MAX(Id)
          FROM TABLE_NAME
          WHERE ValidTo = @MaxDate
          GROUP BY COL1
      );
Now, if I run the stored procedure with the above code, it takes around 12 seconds to execute (1.2 million rows).
If I change the WHERE clauses to ValidTo = '99991231' then the stored procedure runs in under 1 second, and it runs in parallel.
Could anyone try and explain why this is happening ?
It is not because of the user-defined function; it is because of the variable.
When you use the variable @MaxDate in the DELETE query, the optimizer doesn't know the value of this variable when generating the execution plan. So it generates a plan based on the available statistics on the ValidTo column and some built-in heuristic rules for cardinality estimates when you have an equality comparison in a query.
When you use a literal constant in the query, the optimizer knows its value and can generate a more efficient plan.
If you add OPTION (RECOMPILE), the execution plan will not be cached and will always be regenerated, and the actual value of the variable will be known to the optimizer. It is quite likely that the query will run fast with this option. This option does add a certain overhead, but it is noticeable only when you run the query very often.
DECLARE @MaxDate datetime = dbo.udfMaxDateTime();

DELETE FROM TABLE_NAME
WHERE ValidTo = @MaxDate
  AND Id NOT IN
      (
          SELECT MAX(Id)
          FROM TABLE_NAME
          WHERE ValidTo = @MaxDate
          GROUP BY COL1
      )
OPTION (RECOMPILE);
I highly recommend reading Slow in the Application, Fast in SSMS by Erland Sommarskog.

Stored procedure date parameter filters - Ignore if Null

I am using the following SQL in my stored procedure to not filter by date parameters if they are null.
WHERE (Allocated >= ISNULL(@allocatedStartDate, '01/01/1900')
  AND Allocated <= ISNULL(@allocatedEndDate, '01/01/3000'))
  AND
      (MatterOpened >= ISNULL(@matterOpenedStartDate, '01/01/1900')
  AND MatterOpened <= ISNULL(@matterOpenedEndDate, '01/01/3000'))
Will this give any kind of performance hit when dealing with a lot of records?
Is there a better way to do this?
Number of records - around 500k
Or just let the query optimizer have it:
WHERE ( @allocatedStartDate IS NULL OR Allocated >= @allocatedStartDate ) AND
      ( @allocatedEndDate IS NULL OR Allocated <= @allocatedEndDate ) AND
      ( @matterOpenedStartDate IS NULL OR MatterOpened >= @matterOpenedStartDate ) AND
      ( @matterOpenedEndDate IS NULL OR MatterOpened <= @matterOpenedEndDate )
Note that this is not logically equivalent to your query. The last line uses column MatterOpened, not Allocated, as I assume that was a typographic error.
If performance is really an issue, you may want to consider adding indexes and changing the stored procedure to execute different queries based on the parameters. At least break it into: no filter, filter only on Allocated, filter only on MatterOpened, filter on both columns.
In a lot of cases, dynamic SQL can be better for you instead of trying to rely on the optimizer to cache a good plan for both NULL and non-NULL parameters.
DECLARE @sql NVARCHAR(MAX);

SET @sql = N'SELECT
    ...
    WHERE 1 = 1';

SET @sql = @sql + CASE WHEN @allocatedStartDate IS NOT NULL THEN
    ' AND Allocated >= ''' + CONVERT(CHAR(8), @allocatedStartDate, 112) + ''''
    ELSE '' END;

-- repeat for other clauses

EXEC sp_executesql @sql;
No, it's not fun to maintain, but each variation should get its own plan in the cache. You'll want to test with different settings for "Optimize for ad hoc workloads" and database-level parameterization settings. Oops, just noticed 2005. Keep those in mind for the future (and any readers who aren't still stuck on 2005).
Also make sure to use EXEC sp_executesql and not EXEC.
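A minimal sketch of why that matters (dbo.Matters is a made-up table name for illustration): sp_executesql accepts typed parameters, so the same statement text gets one reusable cached plan, whereas plain EXEC() only takes a concatenated string.

```sql
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT * FROM dbo.Matters WHERE Allocated >= @start';

-- Parameterized: the value travels separately from the text, so the plan is reusable
EXEC sp_executesql @sql,
                   N'@start DATETIME',
                   @start = @allocatedStartDate;

-- Contrast with plain EXEC(): the value is baked into the string,
-- so every distinct value compiles its own ad hoc plan:
-- EXEC (N'SELECT * FROM dbo.Matters WHERE Allocated >= ''20170101''');
```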
Instead of checking to see if the variable is null in your query, check them at the beginning of your stored procedure and change the value to your default
SELECT
    @allocatedStartDate = ISNULL(@allocatedStartDate, '01/01/1900'),
    @allocatedEndDate = ISNULL(@allocatedEndDate, '01/01/3000'),
    @matterOpenedStartDate = ISNULL(@matterOpenedStartDate, '01/01/1900'),
    @matterOpenedEndDate = ISNULL(@matterOpenedEndDate, '01/01/3000')
Maybe something like this:
DECLARE @allocatedStartDate DATETIME = GETDATE()
DECLARE @allocatedEndDate DATETIME = GETDATE() - 2

;WITH CTE AS
(
    SELECT
        ISNULL(@allocatedStartDate, '01/01/1900') AS allocatedStartDate,
        ISNULL(@allocatedEndDate, '01/01/3000') AS allocatedEndDate
)
SELECT *
FROM YourTable
CROSS JOIN CTE
WHERE (Allocated >= CTE.allocatedStartDate
  AND Allocated <= CTE.allocatedEndDate)
  AND
      (MatterOpened >= CTE.allocatedStartDate
  AND MatterOpened <= CTE.allocatedEndDate)

Will index be used when using OR clause in where

I wrote a stored procedure with optional parameters.
CREATE PROCEDURE dbo.GetActiveEmployee
    @startTime DATETIME = NULL,
    @endTime DATETIME = NULL
AS
SET NOCOUNT ON

SELECT columns
FROM table
WHERE (@startTime IS NULL OR table.StartTime >= @startTime) AND
      (@endTime IS NULL OR table.EndTime <= @endTime)
I'm wondering whether indexes on StartTime and EndTime will be used?
Yes they will be used (well probably, check the execution plan - but I do know that the optional-ness of your parameters shouldn't make any difference)
If you are having performance problems with your query then it might be a result of parameter sniffing. Try the following variation of your stored procedure and see if it makes any difference:
CREATE PROCEDURE dbo.GetActiveEmployee
    @startTime DATETIME = NULL,
    @endTime DATETIME = NULL
AS
SET NOCOUNT ON

DECLARE @startTimeCopy DATETIME
DECLARE @endTimeCopy DATETIME
SET @startTimeCopy = @startTime
SET @endTimeCopy = @endTime

SELECT columns
FROM table
WHERE (@startTimeCopy IS NULL OR table.StartTime >= @startTimeCopy) AND
      (@endTimeCopy IS NULL OR table.EndTime <= @endTimeCopy)
This disables parameter sniffing (SQL Server using the actual values passed to the SP to optimise it). In the past I've fixed some weird performance issues doing this, though I still can't satisfactorily explain why.
Another thing that you might want to try is splitting your query into several different statements depending on the NULL-ness of your parameters:
IF @startTime IS NULL
BEGIN
    IF @endTime IS NULL
        SELECT columns FROM table
    ELSE
        SELECT columns FROM table WHERE table.EndTime <= @endTime
END
ELSE
BEGIN
    IF @endTime IS NULL
        SELECT columns FROM table WHERE table.StartTime >= @startTime
    ELSE
        SELECT columns FROM table WHERE table.StartTime >= @startTime AND table.EndTime <= @endTime
END
This is messy, but might be worth a try if you are having problems. The reason it helps is that SQL Server can only have a single execution plan per SQL statement, yet your statement can potentially return vastly different result sets.
For example, if you pass in NULL and NULL you will return the entire table and the most optimal execution plan, however if you pass in a small range of dates it is more likely that a row lookup will be the most optimal execution plan.
With this query as a single statement SQL server is forced to choose between these two options, and so the query plan is likely to be sub-optimal in certain situations. By splitting the query into several statements however SQL server can have a different execution plan in each case.
(You could also use the exec function / dynamic SQL to achieve the same thing if you preferred)
There is a great article on dynamic search criteria in SQL. The method I personally use from the article is the X = @X OR @X IS NULL style with OPTION (RECOMPILE) added at the end. If you read the article it will explain why.
http://www.sommarskog.se/dyn-search-2008.html
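Applied to the procedure in this question, that style looks roughly like the following (a sketch in the article's spirit, not its exact code):

```sql
SELECT columns
FROM table
WHERE (table.StartTime >= @startTime OR @startTime IS NULL)
  AND (table.EndTime <= @endTime OR @endTime IS NULL)
-- Recompiling per execution lets the optimizer see the actual values,
-- fold the NULL checks away as constants, and build a plan for just
-- the predicates that remain.
OPTION (RECOMPILE);
```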
Yes, based on the query provided indexes on or including the StartTime and EndTime columns can be used.
However, the [variable] IS NULL OR ... pattern makes the query non-sargable. If you don't want to use an IF statement (CASE is an expression and cannot be used for control-of-flow logic), dynamic SQL is the next alternative for performant SQL.
IF @startTime IS NOT NULL AND @endTime IS NOT NULL
BEGIN
    SELECT columns
    FROM TABLE
    WHERE starttime >= @startTime
      AND endtime <= @endTime
END
ELSE IF @startTime IS NOT NULL
BEGIN
    SELECT columns
    FROM TABLE
    WHERE starttime >= @startTime
END
ELSE IF @endTime IS NOT NULL
BEGIN
    SELECT columns
    FROM TABLE
    WHERE endtime <= @endTime
END
ELSE
BEGIN
    SELECT columns
    FROM TABLE
END
Dynamically changing searches based on the given parameters is a complicated subject, and doing it one way over another, even with only a very slight difference, can have massive performance implications. The key is to get a good query execution plan that uses an index; don't worry about compact code or about repeating yourself.
Read this and consider all the methods. Your best method will depend on your parameters, your data, your schema, and your actual usage:
Dynamic Search Conditions in T-SQL by Erland Sommarskog
The Curse and Blessings of Dynamic SQL by Erland Sommarskog
The portion of the above articles that apply to this query is Umachandar's Bag of Tricks, but it is basically defaulting the parameters to some value to eliminate needing to use the OR. This will give the best index usage and overall performance:
CREATE PROCEDURE dbo.GetActiveEmployee
    @startTime DATETIME = NULL,
    @endTime DATETIME = NULL
AS
SET NOCOUNT ON

DECLARE @startTimeCopy DATETIME
DECLARE @endTimeCopy DATETIME
SET @startTimeCopy = COALESCE(@startTime, '01/01/1753')
SET @endTimeCopy = COALESCE(@endTime, '12/31/9999')

SELECT columns
FROM table
WHERE table.StartTime >= @startTimeCopy AND table.EndTime <= @endTimeCopy
Probably not. Take a look at this blog posting from Tony Rogerson SQL Server MVP:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/05/17/444.aspx
You should at least get the idea that you need to test with credible data and examine the execution plans.
I don't think you can guarantee that the index will be used. It will depend a lot on the size of the table, the columns you are showing, the structure of the index and other factors.
Your best bet is to use SQL Server Management Studio (SSMS) and run the query, and include the "Actual Execution Plan". Then you can study that and see exactly which index or indices were used.
You'll often be surprised by what you find.
This is especially true if there is an OR or IN in the query.