I am using the SQL Server execution plan to analyse the performance of a stored procedure. I have two results, with and without an index. In both results the estimated cost of the operator shows the same value (0.0032831), but the cost % differs: without the index it is 7%, and with the index it is 14%.
What does this really mean?
Thanks in advance.
It means that the plan without the index is costed as twice as expensive overall.
For the first plan the total cost is 0.0032831 / 0.07 ≈ 0.0469; for the second it is 0.0032831 / 0.14, which is clearly going to be half that number.
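The arithmetic can be checked directly; a quick sketch (the displayed percentages are rounded, so the results are approximate):

```sql
-- Back out the total plan cost from one operator's cost and its cost %.
DECLARE @operator_cost float = 0.0032831;

SELECT @operator_cost / 0.07 AS total_without_index,  -- ~0.0469
       @operator_cost / 0.14 AS total_with_index;     -- ~0.0235, half as much
```

The operator itself is equally cheap in both plans; it simply accounts for a larger share of a cheaper total when the index is present.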
Related
I have a query that performs very quickly, but in production, when server load is high, its performance is underwhelming. I suspect this might be because the Estimated Rows are much lower than the Actual Rows in the execution plan. I know that the server statistics are not stale.
I am now optimizing a new query and I worry that it will have the same problem in production. The number of rows returned, the CPU, and the reads are all well within the thresholds my data admins require. As you can see in the SQL Sentry plan above, there are a few temp tables that estimate a single row but return 100 times that many.
My question is this: even when the number of rows is small, does such a large percentage difference between estimated and actual rows cause bottlenecks in server performance? Secondary question: if the problem isn't a bad cached plan or stale stats, what other issues would cause a plan to show such a discrepancy?
A difference between actual and estimated rows does not cause a "bottleneck" in the server.
The impact is on algorithms and resource allocation for the query. SQL Server has multiple algorithms that it can use for things like JOINs and GROUP BYs. The (estimated) size of the data is one of the primary items of information that it uses to choose the appropriate algorithm.
Choosing the wrong algorithm is not exactly a bottleneck, but it does slow the query down. You would need to study the execution plan to see if this is happening in your case.
If you have simple queries that select from a single table, then there are far fewer options for the execution plan. The only impact I can readily think of in this case would be using a full table scan rather than an index for filtering. For your data sizes, I don't think that would make much of a difference.
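To check whether this is happening, you can capture the actual plan for a query and compare each operator's estimated and actual row counts. A minimal sketch, with placeholder table and column names:

```sql
-- Returns the plan XML alongside the results; each operator carries
-- EstimateRows, and its RunTimeInformation element carries ActualRows.
SET STATISTICS XML ON;

SELECT *
FROM dbo.SomeTable          -- placeholder: the table you are tuning
WHERE SomeColumn = 42;      -- placeholder predicate

SET STATISTICS XML OFF;
```

Large gaps between the two numbers on a join or aggregate operator are exactly the situations where the wrong algorithm may have been chosen.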
Estimate Rows vs Actual Rows, what is the impact on performance?
If there is a huge difference between Estimated Rows and Actual Rows, then you need to worry about that query.
There can be a number of reasons for this:
Stale statistics.
Skewed data distribution: the statistics are up to date, but the data is skewed. Creating filtered statistics for those indexes will help.
Un-optimized query: a poorly written query, e.g. join conditions expressed in the wrong manner.
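A sketch of the first two fixes, with placeholder object names and a placeholder filter range:

```sql
-- 1. Stale statistics: refresh them with a full scan.
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;

-- 2. Skewed distribution: a filtered statistic builds a histogram over
--    just the skewed range, giving the optimizer better estimates there.
CREATE STATISTICS st_SomeTable_hot_range
ON dbo.SomeTable (SomeColumn)
WHERE SomeColumn BETWEEN 40000 AND 50000
WITH FULLSCAN;
```

The third cause, a poorly written query, has no one-line fix; the join conditions themselves have to be reviewed.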
Why the cost of the execution plan that was generated based on stale statistics is cheaper than the cost of the plan based on updated statistics?
To understand my problem, please follow the scenario below:
Assumption: auto statistics update is off.
After updating statistics manually with a full scan, I executed the following batch with the actual execution plan included:
CHECKPOINT;              -- flush dirty pages before clearing the cache
DBCC DROPCLEANBUFFERS;   -- start from a cold buffer pool

SELECT *
FROM AWSales             -- table has 60,000 rows
WHERE SalesOrderID = 44133
OPTION (RECOMPILE);
-- returns 17 rows
The optimizer generated a plan that used non clustered index seek and key lookup - that was definitely fine.
Then I wanted to cheat the optimizer, so I inserted 60,000 rows with the value 44133 in the SalesOrderID column.
Without updating statistics, I executed the batch again and the optimizer returned the same plan (with the index seek), but of course with a different cost (60,000 rows returned).
Next I updated statistics manually with a full scan for the table and executed the batch again. This time the optimizer returned a different plan, with an index scan operator. Predictable. At first glance it looked good. But when I compared the query plan costs, the new plan was more expensive than the plan that used the index seek. So after updating statistics, the query with the optimal plan looked slower. NONSENSE!
Then I wanted to compare the cost of the index seek before and after the update. The updated statistics made the optimizer choose the plan with the index scan, so to force a plan with the index seek I added a hint to the query. After executing it, it turned out that the actual cost of the query (with the forced index seek) was MUCH bigger than the cost when statistics were stale. How is that possible?
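For reference, the seek can be forced either with FORCESEEK or with an explicit index hint; something along these lines (the index name is a placeholder):

```sql
-- Force the seek + key lookup plan even though the optimizer
-- now (correctly) prefers the scan for 60,000 matching rows.
SELECT *
FROM AWSales WITH (FORCESEEK)   -- or WITH (INDEX (IX_AWSales_SalesOrderID))
WHERE SalesOrderID = 44133
OPTION (RECOMPILE);
```

With fresh statistics the optimizer expects roughly 60,000 key lookups, so the seek plan is costed far higher; with the stale statistics it expected only 17 rows, which is why the old cost looked so cheap.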
For more details, please look at the sample script.
Execution Plan Download Link: https://www.dropbox.com/s/3spvo46541bf6p1/Execution%20plan.xml?dl=0
I'm using SQL Server 2008 R2
I have a pretty complex stored procedure that's requesting way too much memory upon execution. Here's a screenshot of the execution plan:
http://s15.postimg.org/58ycuhyob/image.png
The underlying query probably needs a lot of tuning, as indicated by the massive number of estimated rows, but that's beside the point. Regardless of the complexity of the query, it should not be requesting 3 gigabytes of memory upon execution.
How do I prevent this behavior? I've tried the following:
DBCC FREEPROCCACHE to clear the plan cache. This accomplished nothing.
Setting the RECOMPILE option at both the stored procedure and statement level. Again, this did nothing.
Messing around with the MAXDOP option, from 0 to 8. Same issue.
The query returns about 1k rows on average, and it reads from a table with more than 3 million rows, with about 4 tables being joined. Executing the query returns the result in less than 3 seconds in the majority of cases.
Edit:
One more thing: using query hints is not really viable in this case, since the parameter values vary greatly.
Edit2:
Uploaded execution plan upon request
Edit3:
I've tried rebuilding/reorganizing fragmented indexes. Apparently there were a few, but nothing too serious. Anyhow, this didn't reduce the amount of memory granted, and didn't reduce the number of estimated rows (if that is somehow related).
You say optimizing the query is beside the point, but actually it's exactly the point. When a query is executed, SQL Server will, after generating the execution plan, reserve the memory needed for executing the query. The more rows the intermediate results are estimated to hold, the more memory is estimated to be required.
So, rewrite your query and/or create new indexes to get a decent query plan. A quick glance at the plan shows some nested loops without join predicates, and a number of table scans of which probably only a few records are used.
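While you tune, you can watch what the query actually requests versus what it is granted via the memory-grant DMV; a minimal sketch:

```sql
-- Requested vs granted workspace memory for currently executing queries.
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       ideal_memory_kb,
       query_cost
FROM sys.dm_exec_query_memory_grants;
```

As the row estimates come down after the rewrite, the requested_memory_kb figure should come down with them.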
I have read references to the cost of a SQL statement everywhere in database literature.
What exactly does it mean? Is it the number of statements to be executed, or something else?
Cost is a rough measure of how much CPU time and disk I/O the server must spend to execute the query.
Typically the cost is split into a few components:
Cost to parse the query; it can be non-trivial just to understand what you are asking for.
Cost to read data from disk, and to access indexes if that reduces the cost.
Cost to compute some mathematical operations on your data.
Cost to group or sort data.
Cost to deliver data back to client.
The cost is an arbitrary number that is higher if the CPU time/IO/Memory is higher. It is also vendor specific.
It means how much it will "cost" you to run a specific SQL query in terms of CPU, IO, etc.
For example, Query A can cost you 1.2 s and Query B can cost you 1.8 s.
See here:
Measuring Query Performance : "Execution Plan Query Cost" vs "Time Taken"
The cost of a query relates to how much CPU utilization and time the query will take. It is an estimated value; your query might take less or more time depending on the data. If your tables and all their indexes have been analyzed (ANALYZE TABLE table_name COMPUTE STATISTICS FOR TABLE FOR ALL INDEXES FOR ALL INDEXED COLUMNS), then the cost-based estimate will roughly match your actual query execution time.
Theoretically the answers above can satisfy you. But when you are on the floor working, here is an insight.
Practically, you can gauge the cost by the number of seeks and scans your SQL query performs.
Go to the execution plan, and accordingly you will be able to optimize the amount of time (roughly, the cost) your query takes.
A sample execution plan looks like this :
It means the query's performance; how optimized the query is.
I have a SQL Server 2000 query which performs a clustered index scan and displays a very high number of rows. For a table with 260,000 records, the actual number of rows displayed in the execution plan is... 34,000,000.
Does this make sense? What am I misunderstanding?
Thanks.
The values displayed on the execution plan are estimates based on statistics. Normal queries like:
SELECT COUNT(*) FROM Table
are 100% accurate for your transaction*.
Here's a related question.
*Edge cases may vary depending on transaction isolation level.
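You can see the difference yourself by comparing the metadata/statistics-based count with a real scan (on SQL Server 2005 and later; the table name is a placeholder):

```sql
-- Metadata-based row count: fast, but can drift from reality.
SELECT SUM(p.rows) AS approx_rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.YourTable')
  AND p.index_id IN (0, 1);   -- heap (0) or clustered index (1)

-- Transactionally accurate count: scans the table.
SELECT COUNT(*) AS exact_rows
FROM dbo.YourTable;
```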
More information on stats:
Updating statistics
How often and how (maintenance plans, etc.)
If your row counts are off from the query plan, you'll need to update the statistics, or else the query optimizer may choose the wrong plan. Also, a clustered index scan is almost like a table scan; try to fix up the indexes so you get a clustered index seek, or at least an index seek.
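A minimal sketch of the statistics refresh, with a placeholder table name:

```sql
-- Refresh all statistics on one table with a full scan...
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- ...or refresh statistics database-wide (sampled).
EXEC sp_updatestats;
```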
But... if it's the "Actual Number of Rows", why is that based on statistics?
I assumed that the Estimated Number of Rows is used when building the query plan (and collected from statistics at that time), and that the Actual Number of Rows is extra information added after the query execution, for user debugging and tuning purposes.
Isn't this right?