How to optimise this SQL function?

create function dbo.fnGetViewedCount (@PaymentId int)  -- name is a placeholder; the original header was cut off from the post
returns int
as
begin
    Declare @viewedCount int

    Select @viewedCount = Count(OtSrno)
    From OtTendersViewDetail
    Where OTPaymentId = @PaymentId
      And OTPaymentId is not null

    return (@viewedCount)
end

Optimization is normally something you do for complex operations; as pure SQL goes, this one is as good as it gets.
What you can do is add WITH RECOMPILE or OPTIMIZE FOR UNKNOWN to avoid parameter sniffing, that is, the reuse of a query plan that was compiled for unrepresentative parameter values.
What you can also do (which is better) is check for OTPaymentId being null and if/else between two different select statements. Again, this is about query plan reuse and getting stuck with a bad query plan.
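For illustration, here is roughly what those two options look like applied to the query above. The procedure wrapper and its name are assumptions, not the original code, and normally you would pick one option rather than both:
-- Sketch only: a stored-procedure version of the same count, showing both
-- plan-related options mentioned above.
CREATE PROCEDURE dbo.GetViewedCount   -- hypothetical name
    @PaymentId int
WITH RECOMPILE                        -- compile a fresh plan on every call
AS
BEGIN
    SELECT COUNT(OtSrno)
    FROM OtTendersViewDetail
    WHERE OTPaymentId = @PaymentId
      AND OTPaymentId IS NOT NULL
    OPTION (OPTIMIZE FOR UNKNOWN);    -- SQL 2008+: build the plan from average statistics
END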


Create parameterized VIEW in SQL Server 2008

Can we create a parameterized VIEW in SQL Server 2008, or is there any other alternative for this?
Try creating an inline table-valued function. Example:
CREATE FUNCTION dbo.fxnExample (@Parameter1 INTEGER)
RETURNS TABLE
AS
RETURN
(
    SELECT Field1, Field2
    FROM SomeTable
    WHERE Field3 = @Parameter1
)
GO

-- Then call it like a table or view, just with a parameter:
SELECT * FROM dbo.fxnExample(1)
If you view the execution plan for the SELECT, you will not see any mention of the function at all; it will just show the underlying tables being queried. This is good, as it means statistics on the underlying tables will be used when generating an execution plan for the query.
The thing to avoid is a multi-statement table-valued function, as the underlying table statistics will not be used, which can result in poor performance due to a poor execution plan.
Example of what to avoid:
CREATE FUNCTION dbo.fxnExample (@Parameter1 INTEGER)
RETURNS @Results TABLE (Field1 VARCHAR(10), Field2 VARCHAR(10))
AS
BEGIN
    INSERT @Results
    SELECT Field1, Field2
    FROM SomeTable
    WHERE Field3 = @Parameter1
    RETURN
END
Subtly different, but with potentially big differences in performance when the function is used in a query.
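To see the difference, compare how each version behaves when driven from another table. The driving table below is a hypothetical example:
-- The inline version is expanded into the outer query, so statistics on
-- SomeTable are used to estimate row counts.
SELECT t.Field3, f.Field1, f.Field2
FROM SomeOtherTable AS t              -- hypothetical driving table
CROSS APPLY dbo.fxnExample(t.Field3) AS f;
-- The multi-statement version is a black box here: the optimiser assumes a
-- fixed, tiny row count for @Results, which can yield a poor join plan.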
No, you cannot. But you can create a user-defined table function.
In fact, there is one trick:
create view view_test as
select *
from table
where id = (select convert(int, convert(binary(4), context_info))
            from master.dbo.sysprocesses
            where spid = @@SPID)
...
Then, in the SQL query:
set context_info 2
select * from view_test
behaves the same as
select * from table where id = 2
But using a UDF is more acceptable.
As astander has mentioned, you can do that with a UDF. However, for large sets the performance of a scalar function (as opposed to an inline table function) will stink, as the function is evaluated row by row. As an alternative, you could expose the same results via a stored procedure executing a fixed query with placeholders, which substitutes in your parameter values.
(Here's a somewhat dated but still relevant article on row-by-row processing for scalar UDFs.)
Edit: comments re. degrading performance adjusted to make it clear this applies to scalar UDFs.
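For illustration, a minimal sketch of that alternative, reusing the example names from the answer above (the procedure name is an assumption):
CREATE PROCEDURE dbo.GetFilteredRows  -- hypothetical name
    @Parameter1 INTEGER
AS
BEGIN
    SET NOCOUNT ON;
    -- One fixed, parameterised query evaluated as a set,
    -- instead of a scalar UDF evaluated row by row.
    SELECT Field1, Field2
    FROM SomeTable
    WHERE Field3 = @Parameter1;
END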
No, but you can use a UDF, to which you can pass parameters.

Avoiding branching in stored procedures that return search results?

I have an application that needs to return search results from a SQL Server 2008 database. I would like to use a single stored procedure to return the results, but I am finding that as I build the stored procedure it fills up with If ... Else If blocks, with the query repeated over and over with slight variations depending on the user's search criteria.
Is there a better way to go about this? I am trying to avoid writing dynamic SQL because I would like the benefits of an execution plan, but I am thinking there must be a better way. Does anyone have any suggestions, or perhaps examples of how best to design a stored procedure that has to deal with many search parameters, many of which may be NULL? Thank you.
Not really.
With SQL Server 2005 and above, with statement-level recompilation, there is less of a penalty with OR clauses: just maintenance complexity.
Using Richard Harrison's approach makes it worse, because OR is non-sargable; it runs slowly and most likely won't use indexes.
Dynamic SQL opens up SQL injection, quoting and caching issues.
This leaves sp_executesql as per CountZero's answer which still requires building up strings.
The solution may not be code-based... do you really need to search on all fields at any one time? I'd try to split it into simple and advanced searches, or work out which searches are most common and try to cover those queries.
I've always done this by using default values and conditions; e.g.
CREATE PROCEDURE [dbo].[searchForElement]
(
    @Town nvarchar(100) = '',
    @County nvarchar(100) = '',
    @postcode nvarchar(100) = ''
)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT <fields>
    FROM table
    WHERE (@Town = '' OR Town LIKE '%'+@Town+'%')
      AND (@County = '' OR County LIKE '%'+@County+'%')
      AND (@postcode = '' OR postcode LIKE '%'+@PostCode+'%')
END
Edit:
As @gbn correctly advises, the above will result in an index scan, which may be a problem for large tables. If that is a problem, the solution below uses ISNULL and the fact that concatenating NULL with anything yields NULL; it allows an index seek because the '%' pattern is understood by the optimiser (tested on SQL 2008). This may be less readable, but it makes better use of the indexes.
CREATE PROCEDURE [dbo].[searchForElement]
(
    @Town nvarchar(100) = NULL,
    @County nvarchar(100) = NULL,
    @postcode nvarchar(100) = NULL
)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT <fields>
    FROM table
    WHERE Town LIKE ISNULL('%'+@Town+'%', '%')
      AND County LIKE ISNULL('%'+@County+'%', '%')
      AND Postcode LIKE ISNULL('%'+@PostCode+'%', '%')
END
I always run into this problem myself. I tend to use dynamic SQL; as long as you use sp_executesql, the optimizer will try to reuse the same execution plan.
http://ayyanar.blogspot.com/2007/11/performance-difference-between-exec-and.html
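For reference, a minimal sketch of that approach (table and column names are placeholders): only the predicates the user actually supplied are appended, and values stay as parameters, so a plan can be reused per predicate combination.
DECLARE @sql nvarchar(max)
DECLARE @Town nvarchar(100)
SET @Town = N'Springfield'            -- sample search value

SET @sql = N'SELECT Town, County, Postcode FROM dbo.SomeTable WHERE 1 = 1'
IF @Town IS NOT NULL
    SET @sql = @sql + N' AND Town LIKE ''%'' + @Town + ''%'''

-- Parameterised execution: the value never gets concatenated into the SQL,
-- and the plan for this predicate combination can be cached and reused.
EXEC sp_executesql @sql, N'@Town nvarchar(100)', @Town = @Town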

No query plan for procedure in SQL Server 2005

We have a SQL Server DB with 150-200 stored procs, all of which produce a viewable query plan in sys.dm_exec_query_plan except for one. According to http://msdn.microsoft.com/en-us/library/ms189747.aspx:
Under the following conditions, no Showplan output is returned in the query_plan column of the returned table for sys.dm_exec_query_plan:
If the query plan that is specified by using plan_handle has been evicted from the plan cache, the query_plan column of the returned table is null. For example, this condition may occur if there is a time delay between when the plan handle was captured and when it was used with sys.dm_exec_query_plan.
Some Transact-SQL statements are not cached, such as bulk operation statements or statements containing string literals larger than 8 KB in size. XML Showplans for such statements cannot be retrieved by using sys.dm_exec_query_plan unless the batch is currently executing because they do not exist in the cache.
If a Transact-SQL batch or stored procedure contains a call to a user-defined function or a call to dynamic SQL, for example using EXEC (string), the compiled XML Showplan for the user-defined function is not included in the table returned by sys.dm_exec_query_plan for the batch or stored procedure. Instead, you must make a separate call to sys.dm_exec_query_plan for the plan handle that corresponds to the user-defined function.
And later..
Due to a limitation in the number of nested levels allowed in the xml data type, sys.dm_exec_query_plan cannot return query plans that meet or exceed 128 levels of nested elements.
I'm confident that none of these apply to this procedure. The result never has a query plan, no matter what the timing, so 1 doesn't apply. There are no long string literals or bulk operations, so 2 doesn't apply. There are no user defined functions or dynamic SQL, so 3 doesn't apply. And there's little nesting, so the last doesn't apply. In fact, it's a very simple proc, which I'm including in full (with some table names changed to protect the innocent). Note that the parameter-sniffing shenanigans postdate the problem. It still happens even if I use the parameters directly in the query. Any ideas on why I don't have a viewable query plan for this proc?
ALTER PROCEDURE [dbo].[spGetThreadComments]
    @threadId int,
    @stateCutoff int = 80,
    @origin varchar(255) = null,
    @includeComments bit = 1,
    @count int = 100000
AS
if (@count is null)
begin
    select @count = 100000
end

-- copy parameters to local variables to avoid parameter sniffing
declare @threadIdL int, @stateCutoffL int, @originL varchar(255), @includeCommentsL bit, @countL int
select @threadIdL = @threadId, @stateCutoffL = @stateCutoff, @originL = @origin, @includeCommentsL = @includeComments, @countL = @count

set rowcount @countL

if (@originL = 'Foo')
begin
    select * from FooComments (nolock) where threadId = @threadId and statusCode <= @stateCutoff
    order by isnull(parentCommentId, commentId), dateCreated
end
else
begin
    if (@includeCommentsL = 1)
    begin
        select * from Comments (nolock)
        where threadId = @threadIdL and statusCode <= @stateCutoffL
        order by isnull(parentCommentId, commentId), dateCreated
    end
    else
    begin
        select userId, commentId from Comments (nolock)
        where threadId = @threadIdL and statusCode <= @stateCutoffL
        order by isnull(parentCommentId, commentId), dateCreated
    end
end
Hmm, perhaps the tables aren't really tables. They could be views or something else.
Try putting dbo. (or whatever the schema is) in front of all of the table names, and then check again.
see this article:
http://www.sommarskog.se/dyn-search-2005.html
quote from the article:
"As you can see, I refer to all tables in two-part notation. That is, I also specify the schema (which in SQL 7/2000 parlance normally is referred to as owner). If I would leave out the schema, each user would get his own private version of the query plan."
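A quick illustration using the tables from the question (the sample value is assumed):
DECLARE @threadId int
SET @threadId = 42                    -- sample value

-- Unqualified: plan reuse depends on each user's default schema.
SELECT * FROM Comments WHERE threadId = @threadId

-- Schema-qualified: one shared plan regardless of who runs it.
SELECT * FROM dbo.Comments WHERE threadId = @threadId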

Can we write a sub function or procedure inside another stored procedure

I want to check whether SQL Server (2000/05/08) has the ability to write a nested stored procedure. What I mean is WRITING a sub function/procedure inside another stored procedure, NOT calling another SP.
Why I was thinking about it: one of my SPs has repeated lines of code that are specific to only this SP. So if we had this nested SP feature, I could declare another sub/local procedure inside my main SP, put all the repeating lines in that, and call that local SP from my main SP. I remember such a feature being available in Oracle SPs.
If SQL Server also has this feature, can someone please explain some more details about it, or provide a link to where I can find documentation?
Thanks in advance
Sai
I don't recommend doing this, as a new execution plan must be calculated each time the procedure is created, but YES, it definitely can be done (everything is possible, but not always recommended).
Here is an example:
CREATE PROC [dbo].[sp_helloworld]
AS
BEGIN
    SELECT 'Hello World'

    DECLARE @sSQL VARCHAR(1000)
    SET @sSQL = 'CREATE PROC [dbo].[sp_helloworld2]
    AS
    BEGIN
        SELECT ''Hello World 2''
    END'

    EXEC (@sSQL)

    EXEC [sp_helloworld2];
    DROP PROC [sp_helloworld2];
END
You will get the warning:
The module 'sp_helloworld' depends on the missing object 'sp_helloworld2'. The module will still be created; however, it cannot run successfully until the object exists.
You can bypass this warning by using EXEC('sp_helloworld2') above.
But if you call EXEC [sp_helloworld] you will get the results
Hello World
Hello World 2
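For clarity, that workaround is a one-line change inside the outer procedure: make the inner call through EXEC() as well, so the CREATE no longer references an object that does not exist yet.
    -- Instead of the direct call, which is checked when the module is created:
    EXEC [sp_helloworld2];
    -- use a string-based call, which is only resolved at run time:
    EXEC ('sp_helloworld2');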
It does not have that feature. It is hard to see what real benefit such a feature would provide, apart from stopping the code in the nested SPROC from being called from elsewhere.
Oracle's PL/SQL is something of a special case, being a language heavily based on Ada, rather than simple DML with some procedural constructs bolted on. Whether or not you think this is a good idea probably depends on your appetite for procedural code in your DBMS and your liking for learning complex new languages.
The idea of a subroutine, to reduce duplication or otherwise, is largely foreign to other database platforms in my experience (Oracle, MS SQL, Sybase, MySQL, SQLite in the main).
While the SQL-building proc would work, I think John's right in suggesting that you don't use his otherwise-correct answer!
You don't say what form your repeated lines take, so I'll offer three potential alternatives, starting with the simplest:
1. Do nothing. Accept that procedural SQL is a primitive language lacking so many "essential" constructs that you wouldn't use it at all if it wasn't the only option.
2. Move your procedural operations outside of the DBMS and execute them in code written in a more sophisticated language. Consider ways in which your architecture could be adjusted to extract business logic from your data storage platform (hey, why not redesign the whole thing!).
3. If the repetition is happening in DML, SELECTs in particular, consider introducing views to slim down the queries.
4. Write code to generate, as part of your build process, the stored procedures. That way, if the repeated lines ever need to change, you can change them in one place and automatically regenerate the repetition.
That's four. I thought of another one as I was typing; consider it a bonus.
CREATE TABLE #t1 (digit INT, name NVARCHAR(10));
GO

CREATE PROCEDURE #insert_to_t1   -- temporary procedure: note the # prefix
(
    @digit INT,
    @name NVARCHAR(10)
)
AS
BEGIN
    MERGE #t1 AS tgt
    USING (SELECT @digit, @name) AS src (digit, name)
    ON (tgt.digit = src.digit)
    WHEN MATCHED THEN
        UPDATE SET name = src.name
    WHEN NOT MATCHED THEN
        INSERT (digit, name) VALUES (src.digit, src.name);
END;
GO

EXEC #insert_to_t1 1, 'One';
EXEC #insert_to_t1 2, 'Two';
EXEC #insert_to_t1 3, 'Three';
EXEC #insert_to_t1 4, 'Not Four';
EXEC #insert_to_t1 4, 'Four'; -- update previous record!

SELECT * FROM #t1;
What we're doing here is creating a procedure that lives for the life of the connection and which is then later used to insert some data into a table.
John's sp_helloworld does work, but here's the reason why you don't see this done more often.
There is a very large performance impact when a stored procedure is compiled. There's a Microsoft article on troubleshooting performance problems caused by a large number of recompiles, because this really slows your system down quite a bit:
http://support.microsoft.com/kb/243586
Instead of creating the stored procedure, you're better off just creating a string variable with the T-SQL you want to call, and then repeatedly executing that string variable.
Don't get me wrong - that's a pretty bad performance idea too, but it's better than creating stored procedures on the fly. If you can persist this code in a permanent stored procedure or function and eliminate the recompile delays, SQL Server can build a single execution plan for the code once and then reuse that plan very quickly.
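A minimal sketch of what that looks like (the statement itself is a placeholder for the repeated lines):
DECLARE @block nvarchar(max)
SET @block = N'SELECT ''repeated work here'''   -- placeholder for the repeated code

EXEC (@block)   -- first place the code was duplicated
EXEC (@block)   -- second place: same string, no extra procedure object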
I just had a similar situation in a SQL trigger (similar to a SQL procedure), where I basically had the same insert statement to be executed a maximum of 13 times, for the 13 possible key values resulting from one event. I used a counter, looped 13 times using WHILE, used CASE to process each of the key values, and kept a flag to decide when to insert and when to skip.
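A hypothetical reconstruction of that pattern (T-SQL has no DO WHILE, so a counter-driven WHILE does the job; the table and key values below are made up):
DECLARE @i int, @key int
SET @i = 1

WHILE @i <= 13
BEGIN
    SET @key = CASE @i
                   WHEN 1 THEN 101   -- placeholder key values
                   WHEN 2 THEN 102
                   ELSE NULL         -- NULL doubles as the "skip" flag
               END

    IF @key IS NOT NULL
        INSERT INTO dbo.EventKeys (keyValue) VALUES (@key)  -- hypothetical table

    SET @i = @i + 1
END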
It would be very nice if MS developed GOSUB besides GOTO; it would be an easy thing to do!
Creating procedures or functions for "internal routines" pollutes the object structure.
I "implement" it like this:
BODY1:
    goto HEADER
HEADER_RET1:
    insert into body ...
    goto BODY1_RET

BODY2:
    goto HEADER
HEADER_RET2:
    insert into body ...
    goto BODY2_RET

HEADER:
    insert into header
    if @fork = 1 goto HEADER_RET1
    if @fork = 2 goto HEADER_RET2
    select 1/0 -- flow check!
I too had need of this. I had two functions that brought back case counts to a stored procedure, which was pulling a list of all users and their case counts.
Along the lines of
select name, userID, fnCaseCount(userID), fnRefCount(UserID)
from table1 t1
left join table2 t2
on t1.userID = t2.UserID
For a relatively tiny set (400 users), it was calling each of the two functions one time. In total, that's 800 calls out from the stored procedure. Not pretty, but one wouldn't expect a sql server to have a problem with that few calls.
This was taking over 4 minutes to complete.
Individually, each function call was nearly instantaneous, and even 800 near-instantaneous calls should be nearly instantaneous.
All indexes were in place, and SSMS suggested no new indexes when the execution plan was analyzed for both the stored procedure and the functions.
I copied the code from the function, and put it into the SQL query in the stored procedure. But it appears the transition between sp and function is what ate up the time.
Execution time is still too high at 18 seconds, but that allows the query to complete within our 30-second timeout window.
If I could have had a sub procedure it would have made the code prettier, but still may have added overhead.
I may next try to move the same functionality into a view I can use in a join.
select t1.UserID, t2.name, v1.CaseCount, v2.RefCount
from table1 t1
left join table2 t2
on t1.userID = t2.UserID
left join vwCaseCount v1
on v1.UserID = t1.UserID
left join vwRefCount v2
on v2.UserID = t1.UserID
Okay, I just created views from the functions, so my execution time went from over 4 minutes, to 18 seconds, to 8 seconds. I'll keep playing with it.
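One possible shape for such a view, assuming a per-user case table (the source table here is hypothetical):
CREATE VIEW dbo.vwCaseCount
AS
-- Aggregating once per user lets the optimiser fold the count into the join
-- instead of invoking a scalar function per row.
SELECT userID, COUNT(*) AS CaseCount
FROM dbo.Cases            -- hypothetical source table
GROUP BY userID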
I agree with andynormancx that there doesn't seem to be much point in doing this.
If you really want the shared code to be contained inside the SP then you could probably cobble something together with GOTO or dynamic SQL, but doing it properly with a separate SP or UDF would be better in almost every way.
For whatever it is worth, here is a working example of a GOTO-based internal subroutine. I went that way in order to have a reusable script without side effects, external dependencies, or duplicated code:
DECLARE @n INT

-- Subroutine input parameters:
DECLARE @m_mi INT             -- return selector
-- Subroutine output parameters:
DECLARE @m_use INT            -- instance memory usage
DECLARE @m_low BIT            -- low memory flag
DECLARE @r_msg NVARCHAR(max)  -- low memory description
-- Subroutine internal variables:
DECLARE @v_low BIT,           -- low virtual memory
        @p_low BIT            -- low physical memory
DECLARE @m_gra INT

/* ---------------------- Main script ----------------------- */

-- 1. First subroutine invocation:
SET @m_mi = 1 GOTO MemInfo
MI1: -- return here after invocation
IF @m_low = 1 PRINT '1:Low memory'
ELSE PRINT '1:Memory OK'

SET @n = 2
WHILE @n > 0
BEGIN
    -- 2. Second subroutine invocation:
    SET @m_mi = 2 GOTO MemInfo
    MI2: -- return here after invocation
    IF @m_low = 1 PRINT '2:Low memory'
    ELSE PRINT '2:Memory OK'
    SET @n = @n - 1
END
GOTO EndOfScript

MemInfo:
/* ------------------- Subroutine MemInfo ------------------- */
-- IN : @m_mi: return point: 1 for label MI1 and 2 for label MI2
-- OUT: @m_use: RAM used by instance,
--      @m_low: low memory condition
--      @r_msg: low memory message
SET @m_low = 1
SELECT @m_use = physical_memory_in_use_kb/1024,
       @p_low = process_physical_memory_low,
       @v_low = process_virtual_memory_low
FROM sys.dm_os_process_memory
IF @p_low = 1 OR @v_low = 1 BEGIN
    SET @r_msg = 'Low memory.' GOTO LOWMEM END
SELECT @m_gra = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Memory Grants Pending'
IF @m_gra > 0 BEGIN
    SET @r_msg = 'Memory grants pending' GOTO LOWMEM END
SET @m_low = 0
LOWMEM:
IF @m_mi = 1 GOTO MI1
IF @m_mi = 2 GOTO MI2

EndOfScript:
Thank you all for your replies!
I'm better off creating one more SP with the repeating code and calling that, which is the best way in terms of both performance and looks.

SQL Server - Query Execution Plan For Conditional Statements

How do conditional statements (like IF ... ELSE) affect the query execution plan in SQL Server (2005 and above)?
Can conditional statements cause poor execution plans, and are there any form of conditionals you need to be wary of when considering performance?
** Edited to add ** :
I'm specifically referring to the cached query execution plan. For instance, when the query execution plan for the batch below is cached, are two execution plans cached, one for each outcome of the conditional?
DECLARE @condition BIT

IF @condition = 1
BEGIN
    SELECT * from ...
END
ELSE
BEGIN
    SELECT * from ..
END
You'll get plan recompiles often with that approach. I generally try to split them up, so you end up with:
DECLARE @condition BIT

IF @condition = 1
BEGIN
    EXEC MyProc1
END
ELSE
BEGIN
    EXEC MyProc2
END
This way there's no difference to the end users, and MyProc1 & 2 get their own, proper cached execution plans. One procedure, one query.