Is there any performance difference between views and stored procedures - SQL

I have a large amount of data and have written SQL queries to retrieve it. My question is: should I put these queries in views or in stored procedures?
i.e. I need to know whether there is any major difference between
INSERT INTO table_name EXEC stored_procedure
or
INSERT INTO table_name SELECT * FROM view_name

Is there any major performance difference? No - provided the query inside the stored proc is exactly the same query as inside the view, you should not see any noticeable difference. If you start adding extra code to the proc (parameters, logic, etc.), then all bets are off.
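For illustration, a minimal sketch of the two equivalent forms (all object names here are made up):
CREATE VIEW dbo.vw_ActiveOrders AS
    SELECT OrderId, CustomerId, Amount FROM dbo.Orders WHERE Status = 'Active';
GO
CREATE PROCEDURE dbo.usp_ActiveOrders AS
    SELECT OrderId, CustomerId, Amount FROM dbo.Orders WHERE Status = 'Active';
GO
-- Since the SELECT is identical, these two loads should compile to the same plan:
INSERT INTO dbo.OrderStaging SELECT * FROM dbo.vw_ActiveOrders;
INSERT INTO dbo.OrderStaging EXEC dbo.usp_ActiveOrders;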

It's a matter of style, or of reusability (maintenance).
Personally, I prefer dropping and creating tables rather than building a complex view. The reason is simple: anyone can then understand the logic from a single screen, rather than opening multiple tables, views, GUI code, and maybe report processes.

If the query inside the stored procedure is exactly the same as the query in the view, the query analyzer should use the same execution plan.
I ran some tests here with the same query in an SP and in a view. Both had similar execution times of around 6 seconds each, with the SP being slightly slower. But this was just a test with a simple SQL statement.

Related

Why does exec sp_recompile sometimes not help parameter sniffing?

We have a complex stored procedure that is sometimes subject to parameter sniffing. It is a large, "all-in-one" procedure that is called by many different parts of the system and so it stands to reason that one query plan would not fit all use cases.
This works fine except periodically ONE particular report goes from seconds to minutes. In the past, a quick exec sp_recompile would speed it back up immediately. Now that never works. The report just eventually "fixes itself" in a day or two, meaning it goes back to taking seconds.
Refactoring the stored procedure is currently not an option and I don't want to do the other recommended approaches (saving parameters to local variables, WITH RECOMPILE, OPTIMIZE FOR UNKNOWN) as those are said to have other side effects.
So I have these questions:
Why wouldn't exec sp_recompile speed it up like before?
How can I tell if exec sp_recompile actually cleared the query plan cache? What should be run before, and after, the exec? I've tried some queries from the web but can't clearly tell if something changed, so a specific recipe would be great to have.
Would it be reasonable to clone the procedure under a different name, and call that clone just for this one report? The goal would be to get SQL Server to cache a separate plan just for the report. But I'm not sure whether SQL Server caches plans by procedure name, or caches the individual queries inside the stored procedures. (If it's the latter, this approach is pointless, as any clone of the procedure would contain the same queries.)
Using several CTEs, especially in complex queries (just as when joining against views), can cause the query optimiser problems producing an optimal execution plan.
If you have a lot of CTE definitions, SQL Server will attempt to construct a single monolithic execution plan, and you could hit a plan compilation timeout, resulting in a sub-optimal plan being used.
You could instead replace the CTEs with temp tables - materializing intermediate results often performs better, as each query executes in isolation with its own dedicated, optimal (or at least better) plan. This can help the optimizer make better choices for joins and index usage.
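A minimal sketch of that rewrite, with made-up table names:
-- Instead of: WITH OrderTotals AS (SELECT ...) SELECT ... FROM OrderTotals ...
SELECT o.CustomerId, SUM(o.Amount) AS Total
INTO #OrderTotals                              -- materialize the intermediate result
FROM dbo.Orders o
GROUP BY o.CustomerId;

CREATE INDEX IX_OrderTotals ON #OrderTotals (CustomerId);  -- optional, helps later joins

SELECT c.Name, t.Total
FROM dbo.Customers c
JOIN #OrderTotals t ON t.CustomerId = c.CustomerId;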
If you have two key types of parameters that ideally each require their own optimal plan, then an option would be, as you suggest, to duplicate the procedure, one per use-case.
You can confirm that this results in a separate execution plan by querying for your procedure name via sys.dm_exec_sql_text:
SELECT s.plan_handle, t.text
FROM sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(s.plan_handle) t
WHERE t.text LIKE '%proc name%';
You will see a different plan_handle for each procedure.

Comparing The Performance Of Indexed Views And Stored Procedures In SQL Server

I've just recently become aware of the fact that you can now index your views in SQL Server (see http://technet.microsoft.com/en-us/library/cc917715.aspx). I'm now trying to figure out when I'd get better performance from a query against an indexed view versus the same query inside a stored procedure that's had its execution plan cached.
Take for example the following:
SELECT colA, colB, sum(colC), sum(colD), colE
FROM myTable
WHERE colFDate < '9/30/2011'
GROUP BY colA, colB, colE
The date will be different every time it's run, so if this were a view, I wouldn't include the WHERE in the view and instead have that as part of my select against the view. If it were a stored procedure, the date would be a parameter. Note, there are about 300,000 rows in the table. 200,000 of them would meet the where clause with the date. 10,000 would be returned after the group by.
If this were an indexed view, should I expect better performance from it than from a stored procedure that has had a chance to cache its execution plan? Or would the proc be faster? Or would the difference be negligible? I know we could say "just try both", but there are too many factors that could bias the results and lead me to a false conclusion, so I'd like to hear the theory behind it and the expected outcomes instead.
Thanks!
An indexed view can be regarded like a normal table - it's a materialized collection of rows.
So the question really boils down to whether or not a "normal" query is faster than a stored procedure.
If you look at the steps SQL Server goes through to execute any query (a stored procedure call or an ad-hoc SQL statement), you'll find roughly these steps:
syntactically check the query
if it's okay - it checks the plan cache to see if it already has an execution plan for that query
if there is an execution plan - that plan is (re-)used and the query executed
if there is no plan yet, an execution plan is determined
that plan is stored into the plan cache for later reuse
the query is executed
The point is: ad-hoc SQL and stored procedures are treated no differently.
If an ad-hoc SQL query properly uses parameters - as it should anyway, to prevent SQL injection attacks - its performance characteristics are no different, and most definitely no worse, than executing a stored procedure.
Stored procedures have other benefits (no need to grant users direct table access, for instance), but in terms of performance, properly parameterized ad-hoc SQL queries are just as efficient as stored procedures.
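For example, a properly parameterized ad-hoc call (table and column names are made up) gets its plan cached and reused just like a proc's:
EXEC sp_executesql
    N'SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = @custId',
    N'@custId INT',
    @custId = 42;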
Using stored procedures over non-parameterized queries is better for two main reasons:
Since each non-parameterized query is a new, different query to SQL Server, it has to go through all the steps of determining an execution plan every time. That wastes time, and also wastes plan cache space, since storing the plan doesn't help when that particular query will probably never be executed again.
Non-parameterized queries are at risk of SQL injection attacks and should be avoided at all costs.
Now of course, if your indexed view can significantly reduce the number of rows (by means of its GROUP BY clause), then that indexed view will be significantly faster than a stored procedure running against the whole dataset. But that's not because of the different approaches taken - it's just a matter of scale: querying a few dozen or a few hundred rows will be faster than querying 200,000 or more rows, no matter how you query them.
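To make that concrete, here is roughly what an indexed view for the earlier query could look like. The view name is made up, colFDate is kept in the GROUP BY so the varying date filter can still be applied afterwards, and colC/colD are assumed non-nullable (an indexed view cannot contain SUM over a nullable column):
CREATE VIEW dbo.vw_myTableSummary
WITH SCHEMABINDING                  -- required for an indexed view
AS
SELECT colA, colB, colE, colFDate,
       SUM(colC) AS sumC,
       SUM(colD) AS sumD,
       COUNT_BIG(*) AS cnt          -- COUNT_BIG(*) is required when GROUP BY is present
FROM dbo.myTable
GROUP BY colA, colB, colE, colFDate;
GO
CREATE UNIQUE CLUSTERED INDEX IX_vw_myTableSummary
    ON dbo.vw_myTableSummary (colFDate, colA, colB, colE);
A query against the view would then filter on colFDate and re-aggregate over the remaining rows.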

Does it make sense to create an SP for simple queries like select * from users?

Is it going to be faster if, instead of doing
select * from users where id = 1
or
delete from users where id = 1
or
select count(*) from users
I create an SP for each?
Performance-wise, no it doesn't make any difference.
Security-wise, it does make a difference. Using a sproc means you only need to grant execute permission on the sproc, whereas the non-sproc approach requires permissions to be granted directly on the underlying table(s).
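For example (the proc and user names are hypothetical):
GRANT EXECUTE ON dbo.usp_GetUserById TO app_user;   -- no SELECT/DELETE rights on the users table needed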
Network-traffic-wise - a potential, slight/negligible difference. More applicable to larger statements, where you either send the entire SQL statement across the wire or just the sproc call. Pretty negligible overall.
Maintenance-wise - the sproc approach would allow you to (e.g.) tune a query without having to redeploy the whole application.
Something I'd also be thinking about is parameterising the query instead of using hardcoded values within the SQL statement, to support execution plan reuse.
If you're concerned about efficiency, don't do:
Select * From Users
Instead do:
Select column1, column2 From Users
Otherwise, SQL Server has to retrieve every column in the Users table, which can mean extra lookups and prevents a covering index from satisfying the query.
Personally, I wouldn't put something like this in a stored proc, but some people would, if they are doing all their data access via stored procedures.
If you are using SPs for all data access then yes; otherwise I would not. It's never a good idea to have inconsistent conventions, especially in source code.
I guess you are asking because you think SPs are faster than inline queries sent by a program, but as of SQL Server 2005 this is no longer true, as execution plans for ad-hoc queries are cached as well.
I'm leaning more towards a no on this one but then again I can see scenarios where you may want to do this.
For example, you may wish to implement a layer of security by utilizing stored procedures.
There is no improved plan reuse by using a stored procedure when considering the sample queries you have provided.

SP taking 15 minutes, but the same query when executed returns results in 1-2 minutes

So basically I have this relatively long stored procedure. The basic execution flow is that it SELECTs data INTO some temp tables declared with the # sign, then runs a cursor through those tables to generate a 'running total' into a third temp table, which is created using CREATE TABLE. This resulting temp table is then joined with other tables in the DB to generate the result after some grouping etc. The problem is, this SP had been running fine, returning results in 1-2 minutes. Now, suddenly, it's taking 12-15 minutes. If I extract the query from the SP and execute it in Management Studio, manually setting the same parameters, it returns results in 1-2 minutes, but the SP takes very long. Any idea what could be happening? I tried to generate the actual execution plans of both the query and the SP, but they couldn't be generated because of the cursor. Any idea why the SP takes so long while the query doesn't?
This is the footprint of parameter sniffing. See here for another discussion of it: SQL poor stored procedure execution plan performance - parameter sniffing
There are several possible fixes, including adding WITH RECOMPILE to your stored procedure which works about half the time.
The recommended fix for most situations (though it depends on the structure of your query and sproc) is to NOT use your parameters directly in your queries, but rather store them into local variables and then use those variables in your queries.
It's due to parameter sniffing. First declare a local variable, assign the incoming parameter's value to it, and use the local variable throughout the procedure. Here is an example:
ALTER PROCEDURE [dbo].[Sp_GetAllCustomerRecords]
    @customerId INT
AS
BEGIN
    -- Copy the parameter into a local variable to sidestep parameter sniffing
    DECLARE @customerIdTemp INT;
    SET @customerIdTemp = @customerId;

    SELECT *
    FROM Customers e
    WHERE e.CustomerId = @customerIdTemp;
END
Try this approach.
Try recompiling the sproc to ditch any stored query plan
exec sp_recompile 'YourSproc'
Then run your sproc, taking care to use sensible parameters.
Also compare the actual execution plans between the two methods of executing the query.
It might also be worth recomputing any statistics.
I'd also look into parameter sniffing. Could be the proc needs to handle the parameters slightly differently.
I usually start troubleshooting issues like this by printing a timestamped marker between the major steps. This helps me narrow down what's taking the most time, and you can compare against a run from Query Analyzer to narrow down where the problem is.
I would guess it could possibly be down to caching. If you run the stored procedure twice, is it faster the second time?
To investigate further, you could run both the stored procedure and the query version from Management Studio with the show-execution-plan option turned on, then compare which area takes longer in the stored procedure than in the plain query.
Alternatively, you could post the stored procedure here for people to suggest optimizations.
For a start, it doesn't sound like the SQL is going to perform too well anyway, given the number of temp tables (which could be held in memory or persisted to tempdb, whatever SQL Server decides is best) and the use of cursors.
My suggestion would be to see if you can rewrite the sproc as a set-based query instead of a cursor approach, which will give better performance and be a lot easier to tune and optimise. Obviously I don't know exactly what your sproc does, so I can't say how easy or viable that is for you.
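For instance, the 'running total' part that usually forces a cursor can often be done set-based. This sketch uses a windowed SUM, which needs SQL Server 2012 or later (table and column names are made up):
SELECT AccountId, TxnDate, Amount,
       SUM(Amount) OVER (PARTITION BY AccountId
                         ORDER BY TxnDate
                         ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM #Transactions;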
As to why the SP is taking longer than the query - difficult to say. Is the system under the same load when you try each approach? If you run the query itself under a light load, it will do better than the SP run under a heavy load.
Also, to ensure the query truly is quicker than the SP, you need to rule out data/execution-plan caching, which makes a query faster on subsequent runs. You can clear the cache using:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
But only do this on a dev/test db server, not on production.
Then run the query, record the stats (e.g. from profiler). Clear the cache again. Run the SP and compare stats.
1) The first time you run the query it may take longer. Also, if you are using a correlated subquery with hardcoded values, it executes only once; when you don't hardcode the values and instead derive them from the input parameter inside the procedure, it can take more time.
2) In rare cases it can be due to network traffic, in which case the execution time won't be consistent even for the same input data.
I too faced a problem where we had to create some temp tables, manipulate them to calculate some values based on rules, and finally insert the calculated values into a third table. All of this in a single SP took around 20-25 minutes. To optimise it we broke the SP into three different SPs, and the total time came down to around 6-8 minutes. Just identify the steps involved in the whole process and how to break them into separate SPs; that way the overall time taken by the entire process should come down.
This is because of parameter sniffing. But how can you confirm it?
Whenever we set out to optimise an SP, we look at the execution plan. But in your case you will see an optimised plan from SSMS, because the SP only takes more time when it's called through code.
For every SP and function, SQL Server can cache two different plans because of the ARITHABORT option: one for SSMS and one for external clients (ADO.NET).
ARITHABORT is ON by default in SSMS but typically OFF for client library connections. So if you want to see the exact query plan your SP uses when called from code, turn the option off in SSMS and execute your SP - you'll see that it also takes 12-13 minutes from SSMS:
SET ARITHABORT OFF
EXEC YourSpName
SET ARITHABORT ON
To solve the problem you need to force a fresh query plan.
There are a couple of ways to do that:
1. Update the table statistics.
2. Recompile the SP.
3. SET ARITHABORT ON inside the SP, so it always uses the same plan as SSMS (this option is not recommended).
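For example, for the first two options (reusing the table and proc names from the earlier sketch as placeholders):
UPDATE STATISTICS dbo.Customers;                     -- option 1: refresh statistics
EXEC sp_recompile 'dbo.Sp_GetAllCustomerRecords';    -- option 2: drop the cached plan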
For more options please refer to this awesome article -
http://www.sommarskog.se/query-plan-mysteries.html
I would suggest the issue is related to the type of temp table (the # prefix). A # temp table holds data for the current session, so when you run the SP through your app, the temp table is dropped and recreated each time.
You might find that when running in SSMS the session stays open, so the data persists and the table is updated rather than recreated.
Hope that helps :)

Stored Procedure Timing out.. Drop, then Create and it's up again?

I have a web-service that calls a stored procedure in an MS-SQL 2005 DB. My web-service was timing out on a call to one of my stored procedures (this has been in production for a couple of months with no timeouts), so I tried running the query in Query Analyzer, which also timed out. I decided to drop and recreate the stored procedure with no changes to the code, and it started performing again.
Questions:
Would this typically be an error in the TSQL of my Stored Procedure?
-Or-
Has anyone seen this and found that it is caused by some problem with the compilation of the Stored Procedure?
Also, of course, any other insights on this are welcome as well.
Similar:
SQL poor stored procedure execution plan performance - parameter sniffing
Parameter Sniffing (or Spoofing) in SQL Server
Have you been updating the statistics on the database? This sounds like the original SP was using an out-of-date query plan; sp_recompile might have helped, rather than dropping and recreating it.
There are a few things you can do to fix/diagnose this.
1) Update your statistics on a regular/daily basis. SQL Server generates query plans (think: optimizes) based on your statistics. If they get "stale", your stored procedure might not perform as well as it used to, especially as your database changes and grows.
2) Look at your stored procedure. Are you using temp tables? Do those temp tables have indexes on them? Most of the time you can find the culprit by looking at the stored procedure (or the tables it uses).
3) Analyze your procedure while it is "hanging" and take a look at the query plan. Are there any missing indexes that would keep your procedure's plan from going nuts? (Look for table scans and your other most expensive operations; see the DMV sketch below.)
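As a rough way to check point 3, the missing-index DMVs list indexes the optimizer wished it had (treat these as hints, not gospel):
SELECT TOP 10
       d.statement AS table_name,
       d.equality_columns, d.inequality_columns, d.included_columns,
       s.user_seeks, s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;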
It is like finding a name in a phone book, sure reading every name is quick if your phone book only consists of 20 or 30 names. Try doing that with a million names, it is not so fast.
This happened to me after moving a few stored procs from development into production. It didn't happen right away; it happened after the production data grew over a couple of months. We had been using functions to compute columns - in some cases several function calls per row. When the data grew, so did the function-call time. The original approach was fine in a testing environment but failed under a heavy load. Check whether there are any function calls in the proc.
I think the table the SP is trying to use is locked by some process. Use exec sp_who and exec sp_lock to find out what is going on with your tables.
If it was working quickly, is (with the passage of several months) no longer working quickly, and the code has not been changed, then it seems likely that the underlying data has changed.
My first guess would be data growth--so much data has been added over the past few months (or past few hours, ya never know) that the query is now bogging down.
Alternatively, as CodeByMoonlight implies, the data may have changed so much over time that the original query plan built for the procedure is no longer good (though this assumes the plan has not been cleared and recompiled at all over a long period of time).
Similarly, index/database statistics may be out of date. Do you have Auto Update Statistics on or off for the database?
Again, this might only help if nothing but the data has changed over time.
Parameter sniffing.
Answered a day or 3 ago: "strange SQL server report performance problem related with update statistics"