I have a question about the stored procedure sp_updatestats. My understanding is that when upgrading from SQL Server 2000 to SQL Server 2008 we need to execute this procedure, but when upgrading from SQL Server 2005 to SQL Server 2008 there is no need to.
Is my understanding correct? Another question: why is it needed when going from 2000 to 2008? Are there any reference documents?
thanks in advance,
George
As per Randy Minder's answer, I've never done it.
You should have regular index and/or statistics maintenance anyway, and an index rebuild updates the statistics as part of the rebuild.
You may want to update stats more often, though, in which case you'd run sp_updatestats separately: for example, index rebuilds at weekends and stats updates nightly.
Up to date statistics are extremely useful to the query optimiser.
This is not necessary, at least not in my experience. Just get into the habit of regularly rebuilding your indexes and your stats will be updated as well.
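For what it's worth, a minimal sketch of that weekend/nightly split, assuming a placeholder table name dbo.Orders (adjust to your own tables):

-- Weekend job: rebuild all indexes on the table; this also rebuilds their statistics with a full scan
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Nightly job: refresh statistics across the whole database without rebuilding indexes
EXEC sp_updatestats;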
Related
I have SQL Server 2014 and a stored procedure which fills up the plan cache. I have to clear the cache with DBCC FREEPROCCACHE every third day, otherwise the stored procedure's runtime grows from 8 seconds to 5 minutes.
We checked the stored procedure code but we have not found any point where "parameter sniffing" could be happening.
We compared the plans from the cache, but all the plans are the same, so it looks like the server cannot reuse the cached plans and compiles a new one every time.
We have several customers with the same DB, but only one has reported this issue; the servers at the other customers are working fine.
Are there any SQL Server settings which could cause this problem?
Does anybody have any tips on what could be wrong in the DB, for example bad indexing or something like that?
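For reference, a hedged sketch of how one could inspect the cached plan and its usage for just that one procedure (dbo.MySlowProc is a placeholder name) rather than flushing the whole cache:

-- Look at the cached plan, its age and its usage counts for the single procedure
SELECT ps.cached_time, ps.execution_count, ps.last_elapsed_time, qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.database_id = DB_ID()
  AND ps.object_id = OBJECT_ID('dbo.MySlowProc');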
In SQL Server 2008, if a stored procedure is created before the indexes are created, will the stored procedure use those indexes after they have been created?
The short answer is yes it would. Stored procedures can even exist before the tables that they use exist.
A longer answer means you need to know about execution plans and the plan cache that SQL Server keeps. When a procedure is run, the plan for it (which can include the indexes to use) is cached and kept for a period of time. So it's possible that the index will get used immediately or after the current execution plan has expired from the cache.
Take a look at Execution plan basics for more info.
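If you don't want to wait for the cached plan to age out, a minimal sketch (dbo.Orders, IX_Orders_CustomerId and dbo.GetOrders are placeholder names) is to mark the procedure for recompilation after creating the index, so the next execution builds a fresh plan that can consider it:

CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);

-- Mark the procedure so its next execution is recompiled and re-optimized
EXEC sp_recompile 'dbo.GetOrders';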
I have two SQL queries with about 2-3 INNER JOINS each. I need to do an INTERSECT between them.
Problem is that individually the queries run fast, but after intersecting them the whole thing takes about 4 seconds.
Now, if I put an OPTION (RECOMPILE) at the end of this whole query, it works great again, returning almost instantly!
I understand that OPTION (RECOMPILE) forces a rebuild of the execution plan, so I am confused now: is my earlier query taking 4 seconds better, or the one with the recompile that returns in 0 seconds?
Rather than answer the question you asked, here's what you should do:
Update your statistics:
EXEC sp_updatestats
If that doesn't work, rebuild indexes.
If that doesn't work, look at OPTIMIZE FOR
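A hedged sketch of the OPTIMIZE FOR hint (the table, column and literal value are placeholders); it goes on the statement inside your procedure or parameterized query and pins the plan to a representative value:

SELECT o.OrderId, o.Total
FROM dbo.Orders AS o
WHERE o.CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId = 42));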
When WITH RECOMPILE is specified, SQL Server does not cache a plan for the stored procedure; the stored procedure is recompiled each time it is executed.
Whenever a stored procedure is run in SQL Server for the first time, it is optimized and a query plan is compiled and cached in SQL Server's memory. Each time the same stored procedure is run after it is cached, it will use the same query plan, eliminating the need for the stored procedure to be optimized and compiled every time it is run. So if you need to run the same stored procedure 1,000 times a day, a lot of time and hardware resources can be saved and SQL Server doesn't have to work as hard.
So you should not use this option routinely, because you lose most of the advantages you gain by putting your SQL queries into stored procedures.
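For reference, a minimal sketch of the procedure-level syntax being discussed (dbo.GetOrders is a placeholder name); with WITH RECOMPILE no plan is ever cached for the procedure, so every call pays the optimization cost:

CREATE PROCEDURE dbo.GetOrders
    @CustomerId int
WITH RECOMPILE
AS
    -- Re-optimized on every execution; no plan is stored in the cache
    SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @CustomerId;
GO

A statement-level OPTION (RECOMPILE) hint, as in the question above, limits that cost to the one problem statement rather than the whole procedure.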
We have a query that is taking around 5 sec on our production system, but on our mirror system (as identical as possible to production) and dev systems it takes under 1 second.
We have checked out the query plans and we can see that they differ. Also, from these plans we can see why one is taking longer than the other. The data, schema and servers are similar and the stored procedures identical.
We know how to fix it by re-arranging the joins and adding hints; however, at the moment it would be easier if we didn't have to make any changes to the SProc (paperwork). We have also tried sp_recompile.
What could cause the difference between the two query plans?
System: SQL 2005 SP2 Enterprise on Win2k3 Enterprise
Update: Thanks for your responses, it turns out that it was statistics. See summary below.
Your statistics are most likely out of date. If your data is the same, recompute the statistics on both servers and recompile. You should then see identical query plans.
Also, double-check that your indexes are identical.
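A hedged sketch of that (dbo.Orders is a placeholder table name); run it on both servers so the optimizer sees the same picture before you compare plans again:

-- Recompute statistics with a full scan so both servers sample the data identically
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;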
Most likely statistics.
Some thoughts:
Do you do maintenance on your non-prod systems? (e.g. rebuild indexes, which also rebuilds statistics)
If so, do you use the same fill factor and statistics sample ratio? (See the sketch after this list for a quick way to compare when statistics were last updated.)
Do you restore the database regularly onto test so it's 100% like production?
Is the data and data size between your mirror and production as close to the same as possible?
If you know why one query is taking longer than the other, can you post some more details?
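The sketch mentioned above: a quick way to see when each statistic was last updated (run it on both servers and compare):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated;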
Execution plans can be different in such cases because of the data in the tables and/or the statistics. Even in cases where auto update statistics is turned on, the statistics can get out of date (especially in very large tables)
You may find that the optimizer has estimated a table is not that large and opted for a table scan or something like that.
Provided there is no WITH RECOMPILE option on your proc, the execution plan will get cached after the first execution.
Here is a trivial example on how you can get the wrong query plan cached:
create proc spTest
    @id int
as
    select * from sysobjects where @id is null or id = @id
go
exec spTest null
-- As expected, it's a clustered index scan
go
exec spTest 1
-- Oh no, it's still a clustered index scan: the plan cached for the NULL call is reused
Try running your SQL in QA on the production server, outside of the stored proc, to determine whether you have an issue with out-of-date statistics or mysterious indexes missing from production.
Tying in to the first answer, the problem may lie with SQL Server's Parameter Sniffing feature. It uses the first value that caused compilation to help create the execution plan. Usually this is good but if the value is not normal (or somehow strange), it can contribute to a bad plan. This would also explain the difference between production and testing.
Turning off parameter sniffing would require modifying the SProc which I understand is undesirable. However, after using sp_recompile, pass in parameters that you'd consider "normal" and it should recompile based off of these new parameters.
I think the parameter sniffing behavior is different between 2005 and 2008 so this may not work.
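A minimal sketch of that approach (dbo.MyProc and the parameter value are placeholders):

-- Drop the current plan, then let a call with a "normal" value seed the new one
EXEC sp_recompile 'dbo.MyProc';
EXEC dbo.MyProc @CustomerId = 42;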
The solution was to recalculate the statistics. I overlooked that, as usually we have scheduled tasks to do all of that, but for some reason the admins didn't put one on this server. Doh.
To summarize all the posts:
Check the setup is the same
Indexes
Table sizes
Restore Database
Execution Plan Caching
If the query runs the same outside the SProc, it's not the Execution Plan
sp_recompile if it is different
Parameter sniffing
Recompute Statistics
I have a web-service that calls a stored procedure from a MS-SQL2005 DB. My web-service was timing out on a call to one of my stored procedures (this has been in production for a couple of months with no timeouts), so I tried running the query in Query Analyzer, which also timed out. I decided to drop and recreate the stored procedure with no changes to the code and it started performing again.
Questions:
Would this typically be an error in the TSQL of my Stored Procedure?
-Or-
Has anyone seen this and found that it is caused by some problem with the compilation of the Stored Procedure?
Also, of course, any other insights on this are welcome as well.
Similar:
SQL poor stored procedure execution plan performance - parameter sniffing
Parameter Sniffing (or Spoofing) in SQL Server
Have you been updating your statistics on the database? This sounds like the original SP was using an out-of-date query plan. sp_recompile might have helped rather than dropping/recreating it.
There are a few things you can do to fix/diagnose this.
1) Update your statistics on a regular/daily basis. SQL Server generates query plans (think optimizations) based on your statistics. If they get "stale", your stored procedure might not perform as well as it used to, especially as your database changes and grows.
2) Look at your stored procedure. Are you using temp tables? Do those temp tables have indexes on them? Most of the time you can find the culprit by looking at the stored procedure (or the tables it uses).
3) Analyze your procedure while it is "hanging": take a look at its query plan. Are there any missing indexes that would help keep your procedure's query plan from going nuts? Look for things like table scans and your other most expensive operators (a diagnostic sketch follows below).
It is like finding a name in a phone book: sure, reading every name is quick if your phone book only contains 20 or 30 names. Try doing that with a million names; it is not so fast.
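The diagnostic sketch mentioned in point 3, assuming you can query the server while the procedure is hanging; it pulls the batch text and live plan of the running requests so you can look for scans and missing indexes:

SELECT r.session_id, r.wait_type, t.text AS batch_text, p.query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS p
WHERE r.session_id <> @@SPID;   -- exclude this diagnostic session itself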
This happened to me after moving a few stored procs from development into production. It didn't happen right away; it happened after the production data grew over a couple of months. We had been using functions to create columns, and in some cases there were several function calls for each row. When the data grew, so did the function call time. The original approach was good in a testing environment but failed under a heavy load. Check if there are any function calls in the proc.
I think the table the SP is trying to use is locked by some process. Use "exec sp_who" and "exec sp_lock" to find out what is going on with your tables.
If it was working quickly, is (with the passage of several months) no longer working quickly, and the code has not been changed, then it seems likely that the underlying data has changed.
My first guess would be data growth--so much data has been added over the past few months (or past few hours, ya never know) that the query is now bogging down.
Alternatively, as CodeByMoonlight implies, the data may have changed so much over time that the original query plan built for the procedure is no longer good (though this assumes that the query plan has not cleared and recompiled at all over a long period of time).
Similarly, index/database statistics may also be out of date. Do you have AutoUpdateStatistics on or off for the database?
Again, this might only help if nothing but the data has changed over time.
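A quick way to check that setting, in case it helps ('YourDatabase' is a placeholder name):

SELECT name, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = 'YourDatabase';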
Parameter sniffing.
Answered a day or 3 ago: "strange SQL server report performance problem related with update statistics"