Functions in SQL Server 2008

Does sql server cache the execution plan of functions?

Yes, see rexem's Tibor link and Andrew's answer.
However... a simple inline table-valued function is unnested/expanded into the outer query anyway, like a view. See also my answer (with links) here.
That is, this type:
CREATE FUNCTION dbo.Foo ()
RETURNS TABLE
AS
RETURN (SELECT ...)
GO
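For illustration, a hypothetical inline function over an assumed dbo.Orders table; when you query it, the function body is expanded into the outer query, so the plan references dbo.Orders directly, just as it would for a view:
CREATE FUNCTION dbo.ActiveOrders ()
RETURNS TABLE
AS
RETURN (SELECT OrderId, CustomerId FROM dbo.Orders WHERE Status = 'Active')
GO
-- The optimizer inlines the function body, so the plan shows dbo.Orders,
-- not dbo.ActiveOrders.
SELECT o.OrderId
FROM dbo.ActiveOrders() AS o
WHERE o.CustomerId = 42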

According to the DMV, yes: http://msdn.microsoft.com/en-us/library/ms189747.aspx, but I'd have to run a test to confirm.
Object ID in the output is "ID of the object (for example, stored procedure or user-defined function) for this query plan".
Tested it and yes it does look like they are getting a separate plan cache entry.
Test Script:
create function foo (@a int)
returns int
as
begin
return @a
end
The most basic of functions created.
-- clear out the plan cache
dbcc freeproccache
dbcc dropcleanbuffers
go
-- use the function
select dbo.foo(5)
go
-- inspect the plan cache
select * from sys.dm_exec_cached_plans
go
The plan cache then has 4 entries; the one listed as objtype = Proc is the cached plan for the function. Grab its plan handle and crack it open:
select * from sys.dm_exec_query_plan(<insertplanhandlehere>)
The first Adhoc entry in my test was the actual query; the second Adhoc entry was the query inspecting the plan cache. So the function definitely received a separate entry, under a different objtype (Proc) to the ad hoc query being issued. The plan handle was also different, and when the plan is extracted using that handle it provides an object ID back to the original function, whilst an ad hoc query provides no object ID.
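If you'd rather not paste the plan handle by hand, here is a sketch of the same check (assuming the dbo.foo function from the test script above, run in the database where it was created):
-- Find the cached plan entry that belongs to dbo.foo directly.
select cp.objtype,
       cp.usecounts,
       object_name(qp.objectid) as object_name,
       qp.query_plan
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_query_plan(cp.plan_handle) qp
where qp.objectid = object_id('dbo.foo')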

Related

SQL Server table-valued function executed code

Based on Row level security I have created a table-valued function:
CREATE FUNCTION Security.userAccessPredicate(@ValueId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
(
SELECT 1 AS accessResult
WHERE @ValueId =
(
SELECT Value
FROM dbo.Values
WHERE UserId = CAST(SESSION_CONTEXT(N'UserId') AS NVARCHAR(50))
) OR NULLIF(CAST(SESSION_CONTEXT(N'UserId') AS nvarchar(50)),'') IS NULL
);
CREATE SECURITY POLICY Security.userSecurityPolicy
ADD FILTER PREDICATE Security.userAccessPredicate(ValueId) ON dbo.MainTable;
Let's say MainTable contains millions of rows. Is userAccessPredicate evaluating SELECT Value FROM dbo.Values for every row independently? If so, I guess it is inefficient. How can I check what exact code is generated when the table-valued function executes? SQL Server Profiler isn't an option because I am using Azure SQL Database.
I am using SQL Server 2016 Management Studio.
The best way is to look at the execution plan with the policy turned off and then turned on; you'll see the extra work it's doing as a consequence (a sketch of toggling the policy follows below). You're adding another table to the query, so it's similar to doing a join but probably more efficient.
To answer your question: if you see a Nested Loops operator added to the plan when the policy is on, then yes, it's going row by row.
Also do the same with DBCC SHOW_STATISTICS to get a look at the resource hits too. With smaller tables (< 100,000 rows) in a similar implementation, I never saw any noticeable performance hit.
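For reference, a minimal sketch of toggling the policy between the two plan captures, assuming the Security.userSecurityPolicy name and dbo.MainTable from the question:
-- Capture the actual execution plan for a representative query with the policy off...
ALTER SECURITY POLICY Security.userSecurityPolicy WITH (STATE = OFF);
SELECT COUNT(*) FROM dbo.MainTable;
-- ...then again with it on, and compare the plans for extra operators
-- (e.g. a nested loop join against dbo.Values).
ALTER SECURITY POLICY Security.userSecurityPolicy WITH (STATE = ON);
SELECT COUNT(*) FROM dbo.MainTable;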
I found this link useful when getting into this before.
https://www.mssqltips.com/sqlservertip/4005/sql-server-2016-row-level-security-limitations-performance-and-troubleshooting/

Need help with SQL query on SQL Server 2005

We're seeing strange behavior when running two versions of a query on SQL Server 2005:
version A:
SELECT otherattributes.* FROM listcontacts JOIN otherattributes
ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = 1234
ORDER BY name ASC
version B:
DECLARE @Id AS INT;
SET @Id = 1234;
SELECT otherattributes.* FROM listcontacts JOIN otherattributes
ON listcontacts.contactId = otherattributes.contactId
WHERE listcontacts.listid = @Id
ORDER BY name ASC
Both queries return 1000 rows; version A takes on average 15s; version B on average takes 4s.
Could anyone help us understand the difference in execution times of these two versions of SQL?
If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server profiler:
EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC',
N'@id INT',
@id=1234;
...and this tends to perform as badly as version A.
Try taking a look at the execution plans for your queries. This should give you some more explanation of how each query is executed.
I've not seen the execution plans, but I strongly suspect that they are different in these two cases. The issue that you are having is that in case A (the faster query) the optimiser knows the value that you are using for the list id (1234) and using a combination of the distribution statistics and the indexes chooses an optimal plan.
In the second case, the optimiser is not able to sniff the value of the ID and so produces a plan that would be acceptable for any passed in list id. And where I say acceptable I do not mean optimal.
So what can you do to improve the scenario? There are a couple of alternatives here:
1) Create a stored procedure to perform the query as below:
CREATE PROCEDURE Foo
@Id INT
AS
SELECT otherattributes.* FROM listcontacts JOIN otherattributes
ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @Id
ORDER BY name ASC
GO
This will allow the optimiser to sniff the value of the input parameter when it is passed in and produce an appropriate execution plan for the first execution. Unfortunately it will cache that plan for reuse later, so unless you generally call the sproc with similarly selective values this may not help you too much.
2) Create a stored procedure as above, but specify it to be WITH RECOMPILE. This will ensure that the stored procedure is recompiled each time it is executed and hence produce a new plan optimised for this input value
3) Add OPTION (RECOMPILE) to the end of the SQL statement. This forces recompilation of the statement on each execution, so it can be optimised for the actual input value.
4) Add OPTION (OPTIMIZE FOR (@Id = 1234)) to the end of the SQL statement. This will cause the cached plan to be optimised for this specific input value. Great if this is a highly common value, or if most common values are similarly selective, but not so great if the distribution of selectivity is more widely spread. (A sketch of options 3 and 4 is shown after this list.)
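A sketch of options 3 and 4 applied to version B of the query (assuming @Id is declared as in the question; pick one hint, not both):
-- Option 3: recompile this statement on every execution,
-- optimising for the actual @Id value each time.
SELECT otherattributes.* FROM listcontacts JOIN otherattributes
ON listcontacts.contactId = otherattributes.contactId
WHERE listcontacts.listid = @Id
ORDER BY name ASC
OPTION (RECOMPILE)

-- Option 4: cache a single plan optimised for @Id = 1234.
SELECT otherattributes.* FROM listcontacts JOIN otherattributes
ON listcontacts.contactId = otherattributes.contactId
WHERE listcontacts.listid = @Id
ORDER BY name ASC
OPTION (OPTIMIZE FOR (@Id = 1234))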
It's possible that instead of casting 1234 to be the same type as listcontacts.listid and then doing the comparison with each row, it might be casting the value in each row to be the same as 1234. The first requires just one cast, the second needs a cast per row (and that's probably on far more than 1000 rows, it may be for every row in the table). I'm not sure what type that constant will be interpreted as but it may be 'numeric' rather than 'int'.
If this is the cause, the second version is faster because it's forcing 1234 to be interpreted as an int and thus removing the need to cast the value in every row.
However, as the previous poster suggests, the query plan shown in SQL Server Management Studio may indicate an alternative explanation.
The best way to see what is happening is to compare the execution plans, everything else is speculation based on the limited details presented in the question.
To see the execution plan, go into SQL Server Management Studio and run SET SHOWPLAN_XML ON, then run query version A; the query will not run, but the execution plan will be returned as XML. Then do the same for query version B and compare the two plans, as sketched below. If you still can't tell the difference or solve the problem, post both execution plans and someone here will explain it.
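For example, a minimal sketch for version A (run it in its own SSMS window so the SET option doesn't linger):
SET SHOWPLAN_XML ON
GO
-- Version A: the XML estimated plan is returned instead of the rows
SELECT otherattributes.* FROM listcontacts JOIN otherattributes
ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = 1234
ORDER BY name ASC
GO
SET SHOWPLAN_XML OFF
GO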

Use SQL to filter the results of a stored procedure

I've looked at other questions on Stack Overflow related to this question, but none of them seemed to answer this question clearly.
We have a system Stored Procedure called sp_who2 which returns a result set of information for all running processes on the server. I want to filter the data returned by the stored procedure; conceptually, I might do it like so:
SELECT * FROM sp_who2
WHERE login='bmccormack'
That method, though, doesn't work. What are good practices for achieving the goal of querying the returned data of a stored procedure, preferably without having to look at the code of the original stored procedure and modify it?
There are no good ways to do that. It is a limitation of stored procedures. Your options are:
Switch the procedure to a user-defined function. All over the world, today, people are making stored procedures that should be functions. It's an education issue. Your situation is a good example of why. If your procedure were instead a UDF, you could just do the following, exactly as you intuitively think you should be able to:
SELECT * FROM udf_who2()
WHERE login='bmccormack'
If you really can't touch your procedure and must have this done in SQL, then you'll have to get funky. Make another stored procedure to wrap your original procedure. Inside the new procedure, call your existing procedure and put the values into a temporary table, then run a query against that table with the filter you want, and return that result to the outside world (a rough sketch of this appears at the end of this answer).
Starting with SQL Server 2005, user-defined functions are how you encapsulate data retrieval. Stored procedures, along with views, are specialty tools to use in particular situations. They're both very handy at the right time, but not the first choice. Some might think that the above example (A) gets all the results of the function and then (B) filters on that resultset, like a subquery. This is not the case. SQL Server 2005+ optimizes that query; if there is an index on login, you will not see a table scan in the query execution plan; very efficient.
Edit: I should add that the innards of a UDF are similar to those of an SP. If it's messing with the logic of the SP that you want to avoid, you can still change it to a function. Several times I've taken large, scary procedure code that I did not want to have to understand, and successfully converted it to a function. The only problem will be if the procedure modifies anything in addition to returning results; UDFs cannot modify data in the db.
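A rough sketch of the wrapper-procedure approach from option 2; the procedure name is hypothetical and the temp table columns only approximate sp_who2's output (INSERT ... EXEC maps by position), so adjust them if your server returns a different shape:
CREATE PROCEDURE dbo.usp_who2_filtered -- hypothetical wrapper name
    @login sysname
AS
BEGIN
    -- Shape approximates sp_who2's result set; adjust columns to match your build.
    CREATE TABLE #who2
        (SPID INT, Status VARCHAR(50), Login sysname NULL, HostName sysname NULL,
         BlkBy VARCHAR(10), DBName sysname NULL, Command VARCHAR(100),
         CPUTime INT, DiskIO INT, LastBatch VARCHAR(50),
         ProgramName VARCHAR(200), SPID2 INT, RequestID INT)

    INSERT INTO #who2
    EXEC sp_who2

    SELECT * FROM #who2 WHERE Login = @login
END
GO

EXEC dbo.usp_who2_filtered @login = 'bmccormack'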
Filtering via a temporary table is one possible way:
-- Create tmp table from sp_who results
CREATE TABLE #TmpWho
(spid INT, ecid INT, status VARCHAR(150), loginame VARCHAR(150),
hostname VARCHAR(150), blk INT, dbname VARCHAR(150), cmd VARCHAR(150), request_id INT)
INSERT INTO #TmpWho
EXEC sp_who
-- filter temp table where spid is 52
SELECT * FROM #TmpWho
WHERE spid = 52
DROP TABLE #TmpWho
You can do an OPENROWSET(), but there are some security/performance issues involved.
SELECT *
FROM OPENROWSET ('SQLOLEDB', 'Server=(local);TRUSTED_CONNECTION=YES;', 'exec mystoredproc')
Traditionally, adding it to a temp variable/table will work.
Place the data in a table variable or temp table and filter on it, for example:
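A minimal sketch with a table variable, reusing the sp_who column list from the temp table example above:
-- INSERT ... EXEC into a table variable works on SQL Server 2005 and later.
DECLARE @Who TABLE
(spid INT, ecid INT, status VARCHAR(150), loginame VARCHAR(150),
hostname VARCHAR(150), blk INT, dbname VARCHAR(150), cmd VARCHAR(150), request_id INT)

INSERT INTO @Who
EXEC sp_who

SELECT * FROM @Who
WHERE loginame = 'bmccormack'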
OPENROWSET() is the way:
SELECT *
FROM
OPENROWSET('SQLNCLI', 'Server=(local);TRUSTED_CONNECTION=YES;', 'exec sp_who')
WHERE loginame = 'test' AND dbname = 'Expirement';
You also need to enable the advanced option before this will work:
sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO

Select Fails With Nonexistent Columns

Executing the following statement with SQL Server 2005 (My tests are through SSMS) results in success upon first execution and failure upon subsequent executions.
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
What this means is that something is comparing the columns I am accessing in my select statement against the columns that exist on the table when the script is "compiled". For my purposes this is undesirable functionality. My question is whether there is anything that can be done so that this code executes successfully on every run, or, if that is not possible, perhaps someone could explain why the demonstrated functionality is desirable. The only solutions I currently have are to wrap the select in EXEC or to use SELECT *, but I don't like either of those solutions.
Thanks
If you put:
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
GO
At the start, then the problem will go away, as the rest of the script will be parsed at a point when the #test table does not exist.
What you're asking is for the system to recognise that "1=0" will always evaluate to false. If it were ever true (which could potentially be the case for most real-life conditions), then you'd probably want to know that you were about to run something that would cause failure.
If you drop the temporary table and then create a stored procedure that does the same:
CREATE PROC dbo.test
AS
BEGIN
IF OBJECT_ID('tempdb..#test') IS NULL
CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
SELECT BadColumn
FROM #test
END
Then this will happily be created, and you can run it as many times as you like.
Rob
Whether or not this behaviour is "desirable" from a programmer's point of view is debatable of course -- it basically comes down to the difference between statically typed and dynamically typed languages. From a performance point of view, it's desirable because SQL Server needs complete information in order to compile and optimize the execution plan (and also cache execution plans).
In a word, T-SQL is not an interpreted or dynamically typed language, and so you cannot write code like this. Your options are either to use EXEC (sketched below) or to use another language and embed the SQL queries within it.
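For example, a minimal sketch of the EXEC approach against the script from the question; the dynamic SQL is only compiled when the branch actually runs, so the nonexistent column no longer breaks the batch:
IF OBJECT_ID('tempdb..#test') IS NULL
    CREATE TABLE #test ( GoodColumn INT )
IF 1 = 0
    EXEC ('SELECT BadColumn FROM #test') -- compiled only if this branch executes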
This problem is also visible in these situations:
IF 1 = 1
select dummy = GETDATE() into #tmp
ELSE
select dummy = GETDATE() into #tmp
Although the second statement is never executed, the same error occurs.
It seems the query engine's first-level validation ignores conditional statements.
You say you have problems with subsequent requests, and that is because the object already exists. It is recommended that you drop your temporary tables as soon as you are done with them.
Read more about temporary table performance at:
SQL-Server-Performance.com

add SQL Server index but how to recompile only affected stored procedures?

I need to add an index to a table, and I want to recompile only/all the stored procedures that make reference to this table. Is there any quick and easy way?
EDIT:
from SQL Server 2005 Books Online, Recompiling Stored Procedures:
As a database is changed by such actions as adding indexes or changing data in indexed columns, the original query plans used to access its tables should be optimized again by recompiling them. This optimization happens automatically the first time a stored procedure is run after Microsoft SQL Server 2005 is restarted. It also occurs if an underlying table used by the stored procedure changes. But if a new index is added from which the stored procedure might benefit, optimization does not happen until the next time the stored procedure is run after Microsoft SQL Server is restarted. In this situation, it can be useful to force the stored procedure to recompile the next time it executes
Another reason to force a stored procedure to recompile is to counteract, when necessary, the "parameter sniffing" behavior of stored procedure compilation. When SQL Server executes stored procedures, any parameter values used by the procedure when it compiles are included as part of generating the query plan. If these values represent the typical ones with which the procedure is called subsequently, then the stored procedure benefits from the query plan each time it compiles and executes. If not, performance may suffer
You can execute sp_recompile and supply the name of the table you've just indexed. All procs that depend on that table will be flushed from the stored proc cache and will be recompiled the next time they are executed.
See this from the msdn docs:
sp_recompile (Transact-SQL)
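For example, assuming the newly indexed table is dbo.MyTable (a placeholder name):
-- Marks every procedure and trigger that references the table
-- for recompilation on its next execution.
EXEC sp_recompile N'dbo.MyTable'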
They are generally recompiled automatically. I guess I don't know if this is guaranteed, but it is what I have observed: if you change the objects referenced by the sproc (e.g. add an index), then it recompiles.
create table mytable (i int identity)
insert mytable default values
go 100
create proc sp1 as select * from mytable where i = 17
go
exec sp1
If you look at the plan for this execution, it shows a table scan as expected.
create index mytablei on mytable(i)
exec sp1
The plan has changed to an index seek.
EDIT: OK, I came up with a query that appears to work. It gives you all sproc names that have a reference to a given table in the plan cache. You can concatenate each sproc name with the sp_recompile syntax to generate a batch of sp_recompile statements you can then execute (see the sketch after the query).
;WITH XMLNAMESPACES (default 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
,TableRefs (SProcName, ReferencedTableName) as
(
select
object_name(qp.objectid) as SProcName,
objNodes.objNode.value('@Database', 'sysname') + '.' + objNodes.objNode.value('@Schema', 'sysname') + '.' + objNodes.objNode.value('@Table', 'sysname') as ReferencedTableName
from sys.dm_exec_cached_plans cp
outer apply sys.dm_exec_sql_text(cp.plan_handle) st
outer apply sys.dm_exec_query_plan(cp.plan_handle) as qp
outer apply qp.query_plan.nodes('//Object[@Table]') as objNodes(objNode)
where cp.cacheobjtype = 'Compiled Plan'
and cp.objtype = 'Proc'
)
select
*
from TableRefs
where SProcName is not null
and isnull(ReferencedTableName,'') = '[db].[schema].[table]'
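To get the batch of sp_recompile calls mentioned in the edit, a sketch that swaps the final SELECT * above for a string-building SELECT (copy the output rows and execute them):
select distinct
    'EXEC sp_recompile N''' + SProcName + ''';' as RecompileStatement
from TableRefs
where SProcName is not null
and isnull(ReferencedTableName,'') = '[db].[schema].[table]'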
I believe that the stored procedures that would potentially benefit from the presence of the index in question will automatically have a new query plan generated, provided the auto generate statistics option has been enabled.
See the section entitled Recompiling Execution Plans for details of what eventualities cause an automatic recompilation.
http://technet.microsoft.com/en-us/library/ms181055(SQL.90).aspx