SQL Server stored procedure reducing amount of memory granted

Execution Plan Download Link: https://www.dropbox.com/s/3spvo46541bf6p1/Execution%20plan.xml?dl=0
I'm using SQL Server 2008 R2
I have a pretty complex stored procedure that's requesting way too much memory upon execution. Here's a screenshot of the execution plan:
http://s15.postimg.org/58ycuhyob/image.png
The underlying query probably needs a lot of tuning, as indicated by the massive number of estimated rows, but that's beside the point. Regardless of the complexity of the query, it should not be requesting 3 gigabytes of memory upon execution.
How do I prevent this behavior? I've tried the following:
DBCC FREEPROCCACHE to clear plan cache. This accomplished nothing.
Setting RECOMPILE option on both SP and SQL level. Again, this does nothing.
Messing around with MAXDOP option, from 0 to 8. Same issue.
The query returns roughly 1k rows on average. It reads from a table with more than 3 million rows, with about 4 tables being joined, and in the majority of cases it returns its results in less than 3 seconds.
Edit:
One more thing: using query hints is not really viable here, since the parameters vary greatly in our case.
Edit2:
Uploaded the execution plan upon request.
Edit3:
I've tried rebuilding/reorganizing fragmented indexes. Apparently there were a few, but nothing too serious. Anyhow, this didn't reduce the amount of memory granted, nor did it reduce the number of estimated rows (if that is somehow related).

You say optimizing the query is beside the point, but actually it is exactly the point. When a query is executed, SQL Server will, after generating the execution plan, reserve the memory needed for executing the query. The more rows the intermediate results are estimated to hold, the more memory is estimated to be required.
So, rewrite your query and/or create new indexes to get a decent query plan. A quick glance at the query plan shows some nested loops without join predicates and a number of table scans from which probably only a few records are actually used.
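If you want to watch the grant itself while you tune, the memory grant DMVs can help. This is just a quick sketch using the standard sys.dm_exec_query_memory_grants and sys.dm_exec_sql_text views (available on 2008 R2); it lists what each currently executing query asked for versus what it actually used:
-- Compare requested, granted and actually used memory for running queries
SELECT mg.session_id,
       mg.requested_memory_kb,
       mg.granted_memory_kb,
       mg.ideal_memory_kb,
       mg.used_memory_kb,
       t.text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t;
A large gap between requested and used memory is usually another sign that the row estimates feeding the plan are far too high.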

Related

Estimate Rows vs Actual Rows, what is the impact on performance?

I have a query that performs very quickly, but in production, when server loads are high, its performance is underwhelming. I have a suspicion that it might be the Estimated Rows being much lower than the Actual Rows in the execution plan. I know that the server statistics are not stale.
I am now optimizing a new query and I worry that it will have the same problem in production. The number of rows returned and the CPU and Reads are well within the designated thresholds my data admins require. As you can see in the above SQL Sentry plan there are a few temp tables that estimate a single row but return 100 times as many rows.
My question is this: even when the number of rows is small, does a difference in rows by such a large percentage cause bottlenecks in the server's performance? A secondary question: if the problem isn't a bad cached plan or stale stats, what other issues would cause a plan to show such a discrepancy?
A difference between actual and estimated rows does not cause a "bottleneck" in the server.
The impact is on algorithms and resource allocation for the query. SQL Server has multiple algorithms that it can use for things like JOINs and GROUP BYs. The (estimated) size of the data is one of the primary items of information that it uses to choose the appropriate algorithm.
Choosing the wrong algorithm is not exactly a bottleneck, but it does slow the query down. You would need to study the execution plan to see if this is happening in your case.
If you have simple queries that select from a single table, then there are many fewer options for the execution plan. The only impact I can readily think of in this case would be using a full table scan rather than an index for filtering. For your data sizes, I don't think that would make much of a difference.
Estimate Rows vs Actual Rows, what is the impact on performance?
If there is a huge difference between Estimated Rows and Actual Rows, then you do need to worry about that query.
There can be a number of reasons for this:
Stale statistics.
Skewed data distribution: the statistics are up to date, but the data is skewed. Creating filtered statistics for those indexes will help (a sketch follows below).
Un-optimized query: a poorly written query, or join conditions written in the wrong manner.
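As a minimal sketch of the filtered-statistics idea (the dbo.Orders table, Status column and filter value are hypothetical; substitute the skewed column and the heavily used range from your own data):
-- Filtered statistics describing only the skewed slice of the data
CREATE STATISTICS st_Orders_Status_Open
ON dbo.Orders (Status)
WHERE Status = 'Open';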

deleting 500 records from table with 1 million records shouldn't take this long

I hope someone can help me. I have a simple sql statement
delete from sometable
where tableidcolumn in (...)
I have 500 records I want to delete and recreate. The table recently grew to over 1 million records. The problem is that the statement above is taking over 5 minutes without completing. I have a primary key and 2 non-clustered, non-unique indexes. My delete statement is using the primary key.
Can anyone help me understand why this statement is taking so long and how I can speed it up?
There are two areas I would look at first, locking and a bad plan.
Locking - run your query and, while it is running, see whether it is being blocked by anything else: select * from sys.dm_exec_requests where blocking_session_id <> 0. If you see anything blocking your request, then I would start by looking at:
https://www.simple-talk.com/sql/database-administration/the-dba-as-detective-troubleshooting-locking-and-blocking/
If there is no locking, then get the execution plan for the delete: what is it doing? Is its cost exceptionally high?
Other than that, how long do you expect it to take? Is it a little bit longer than that or a lot longer? Did it only get so slow after it grew significantly or has it been getting slower over a long period of time?
What is the I/O performance, what are your average read / write times etc etc.
TL;DR: Don't do that (instead of a big 'in' clause: preload and use a temporary table).
With that number of parameters, an unknown backend configuration (even though it should be fine by today's standards), and no way to guess what your in-memory size may be during processing, you may be hitting (in order) a stack, batch or memory size limit. It is also possible to hit an instruction size limit.
The troubleshooting comments may lead you to another answer. My focus is on the 'in' clause, the statement size, and the fact that all of these links include advice to preload a temporary table and use that with your query.
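A rough sketch of the preload-a-temp-table approach, reusing the sometable and tableidcolumn names from the question (the temp table name and the INT key type are assumptions):
-- Load the 500 ids into a temp table first
CREATE TABLE #ids (tableidcolumn INT PRIMARY KEY);
INSERT INTO #ids (tableidcolumn)
VALUES (1), (2), (3);   -- ...the rest of the 500 ids
-- Delete by joining to the temp table instead of a huge IN (...) list
DELETE s
FROM sometable AS s
JOIN #ids AS i
  ON i.tableidcolumn = s.tableidcolumn;
DROP TABLE #ids;
Joining against a keyed temp table keeps the statement small and gives the optimizer something it can seek against.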

Dynamic SQL - long execution time - first time only

I have a stored procedure which builds a dynamic SQL statement depending on its input parameters and then executes it.
One of the queries is causing timeouts, so I decided to check it. The first time (and only the first time) the problematic statement is executed it is slow (30-45 seconds), and every subsequent execution takes 1-2 seconds.
In order to reproduce the issue, I am using
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
I am really confused about where the problem is, because ordinarily, if a SQL statement is slow, it is always slow. Here, it has a long execution time only the first time.
Is it possible that the statement itself is slow and needs optimization, or could the problem be caused by something else?
The execution plan is below, but for me there is nothing strange with it:
From your reply to my comment, it would appear that the first time this query runs it is performing a lot of physical reads or read-ahead reads, meaning that a lot of IO is required to get the right pages into the buffer pool to satisfy this query.
Once pages are read into the buffer pool (memory) they generally stay there so that physical IO is not required to read them again (you can see this is happening as you indicated that the physical reads are converted to logical reads the second time the query is run). Memory is orders of magnitude faster than disk IO, hence the difference in speed for this query.
Looking at the plan, I can just about see that every read operation is being done against the clustered index of the table. As the clustered index contains every column for the row it is potentially fetching more data per row than is actually required for the query.
Unless you are selecting every column from every table, I would suggest creating non-clustered covering indexes that satisfy this query (and that are as narrow as possible). This will reduce the IO requirement for the query and make it less expensive the first time round.
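As an illustration only (the table, column and index names below are made up; the point is the narrow key plus INCLUDE columns), a covering non-clustered index would look something like:
-- Narrow covering index: seek on the filter column, carry the selected columns
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Covering
ON dbo.Orders (OrderDate)
INCLUDE (OrderID, CustomerID, Total);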
Of course this may not be possible/viable for you to do, in which case you should either just take the hit on the first run and not empty the caches, or rewrite the query itself to be more efficient and perform less reads.
The reason the very first execution takes longer while all later executions complete fairly quickly is actually quite simple: cached execution plans.
While working with stored procedures, SQL Server takes the following steps:
1) Parse the syntax of the command.
2) Translate it to a query tree.
3) Develop an execution plan.
4) Execute.
The first two steps only take place when you create the stored procedure.
The 3rd step only takes place on the very first execution, or if the cached plan has been flushed from the plan cache.
The 4th step takes place on every execution, and it is the only step that takes place after the very first execution, as long as the plan is still in the cache.
In your case it is quite understandable that the very first execution took long and later executions run fairly quickly.
To reproduce the "issue" you executed the DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS commands, which basically flush the plan cache and the buffer pool and cause your stored procedure to create a new execution plan on its next execution. Hope this clears the fog a little bit :)
Generally, when a stored procedure is first compiled, or its plan or statistics are reset, SQL Server will take the first value passed into the stored procedure as the 'default' value for the plan. It will then try to optimize the plan based on that.
To stop that from happening, there are a couple of things you can do.
You could potentially use the Query hints feature to mark certain variables as Unknown. So, as an example, at the end of the stored procedure you could put something along the lines of:
select * from foo where foo.bar = @myParam option (optimize for (@myParam unknown))
As another approach, you could force the SQL plan to be re-compiled each time - which might be a good idea if your stored procedure is highly variable in the type of SQL it generates. The way you'd do that is:
select * from foo where foo.bar = @myParam option (recompile)
Hope this helps.

SQL Server cache question

When I run a certain stored procedure for the first time it takes about 2 minutes to finish. When I run it for the second time it finished in about 15 seconds. I'm assuming that this is because everything is cached after the first run. Is it possible for me to "warm the cache" before I run this procedure for the first time? Is the cached information only used when I call the same stored procedure with the same parameters again or will it be used if I call the same stored procedure with different params?
When you perform your query, the data is read into memory in blocks. These blocks remain in memory, but they get "aged". This means the blocks are tagged with the last access time, and when SQL Server requires another block for a new query and the memory cache is full, the least recently used block (the oldest) is kicked out of memory. (In most cases - full table scan blocks are instantly aged to prevent full table scans overrunning memory and choking the server.)
What is happening here is that the data blocks in memory from the first query haven't been kicked out of memory yet so can be used for your second query, meaning disk access is avoided and performance is improved.
So what your question is really asking is "can I get the data blocks I need into memory without reading them into memory (actually doing a query)?". The answer is no, unless you want to cache the entire tables and have them reside in memory permanently which, from the query time (and thus data size) you are describing, probably isn't a good idea.
Your best bet for performance improvement is looking at your query execution plans and seeing whether changing your indexes might give a better result. There are two major areas that can improve performance here:
creating an index where the query could use one to avoid inefficient queries and full table scans
adding more columns to an index to avoid a second disk read. For example, say you have a query that returns columns A and B with a where clause on A and C, and you have an index on column A. Your query will use the index on column A, requiring one disk read, but will then require a second disk hit to get columns B and C. If the index contained all of columns A, B and C, the second disk hit to get the data could be avoided.
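Using the A/B/C example above, the widened index would look something like this (the table and index names are placeholders):
-- Key on the filtered columns A and C, include B so the query is fully covered
CREATE NONCLUSTERED INDEX IX_MyTable_A_C_incl_B
ON dbo.MyTable (A, C)
INCLUDE (B);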
I don't think that generating the execution plan will cost more than 1 second.
I believe that the difference between first and second run is caused by caching the data in memory.
The data in the cache can be reused by any further query (stored procedure or simple select).
You can 'warm' the cache by reading the data through any select that touches the same data, but that will still cost about 90 seconds as well.
You can check the execution plan to find out which tables and indexes your query uses. You can then execute some SQL to get the data into the cache, depending on what you see.
If you see a clustered index seek, you can simply do SELECT * FROM my_big_table to force all the table's data pages into the cache.
If you see a non-clustered index seek, you could try SELECT first_column_in_index FROM my_big_table.
To force a load of a specific index, you can also use the WITH(INDEX(index)) table hint in your cache warmup queries.
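Putting those suggestions together, a warm-up script might look like this (the non-clustered index name is hypothetical; my_big_table and first_column_in_index are taken from above):
-- Warm the clustered index: reads every data page into the buffer pool
SELECT * FROM my_big_table;
-- Warm a specific non-clustered index via an index hint
SELECT first_column_in_index
FROM my_big_table WITH (INDEX(IX_my_big_table_warmup));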
SQL Server caches data read from disk.
Subsequent reads will do less IO.
This is a great help, since disk IO is usually the bottleneck.
More at:
http://blog.sqlauthority.com/2014/03/18/sql-server-performance-do-it-yourself-caching-with-memcached-vs-automated-caching-with-safepeak/
The execution plan (the cached info for your procedure) is reused every time, even with different parameters. It is one of the benefits of using stored procs.
The very first time a stored procedure is executed, SQL Server generates an execution plan and puts it in the procedure cache.
Certain changes to the database can trigger an automatic update of the execution plan (and you can also explicitly demand a recompile).
Execution plans are dropped from the procedure cache based on their "age". (From MSDN: objects infrequently referenced are soon eligible for deallocation, but are not actually deallocated unless memory is required for other objects.)
I don't think there is any way to "warm the cache", except to perform the stored proc once. This will guarantee that there is an execution plan in the cache and any subsequent calls will reuse it.
more detailed information is available in the MSDN documentation: http://msdn.microsoft.com/en-us/library/ms181055(SQL.90).aspx
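If you want to confirm that the plan is actually being reused, something along these lines against the plan cache DMVs will show the reuse count (my_proc_name is a placeholder for your procedure name):
-- How many times has the cached plan for this procedure been reused?
SELECT cp.usecounts, cp.objtype, cp.cacheobjtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%my_proc_name%';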

Long running Jobs Performance Tips

I have been working with SQL Server for a while and have used a lot of performance techniques to fine-tune many queries. Most of these queries were expected to execute within a few seconds, or maybe minutes.
I am now working with a job which loads around 100K records and runs for around 10 hours.
What are the things I need to consider while writing or tuning such query? (e.g. memory, log size, other things)
Make sure you have good indexes defined on the columns you are querying on.
Ultimately, the best thing to do is to actually measure and find the source of your bottlenecks. Figure out which queries in a stored procedure or what operations in your code take the longest, and focus on slimming those down, first.
I am actually working on a similar problem right now, on a job that performs complex business logic in Java for a large number of database records. I've found that the key is to process records in batches, and make as much of the logic as possible operate on a batch instead of operating on a single record. This minimizes roundtrips to the database, and causes certain queries to be much more efficient than when I run them for one record at a time. Limiting the batch size prevents the server from running out of memory when working on the Java side. Since I am using Hibernate, I also call session.clear() after every batch, to prevent the session from keeping copies of objects I no longer need from previous batches.
Also, an RDBMS is optimized for working with large sets of data; use normal SQL operations whenever possible. Avoid things like cursors and a lot of procedural programming; as other people have said, make sure you have your indexes set up correctly.
It's impossible to say without looking at the query. Just because you have indexes doesn't mean they are being used; you'll have to look at the execution plan and see whether they are. It might show that they aren't useful for this query.
You can start by looking at the estimated execution plan. If the job actually completes, you can wait for the actual execution plan. Look at parameter sniffing. Also, I had an extremely odd case on SQL Server 2005 where
SELECT * FROM l LEFT JOIN r ON r.ID = l.ID WHERE r.ID IS NULL
would not complete, yet
SELECT * FROM l WHERE l.ID NOT IN (SELECT r.ID FROM r)
worked fine - but only for particular tables. Problem was never resolved.
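For what it's worth, a third variant to try in that situation is NOT EXISTS, which typically gets the same anti-join plan but is not tripped up by NULLs in r.ID (sketch using the same l and r tables from above):
SELECT l.*
FROM l
WHERE NOT EXISTS (SELECT 1 FROM r WHERE r.ID = l.ID)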
Make sure your statistics are up to date.
If possible, post your query here so there is something to look at. I recall a query someone built with joins to 12 different tables, dealing with around 4 million records, that took around a day to run. I was able to tune it to run within 30 minutes by eliminating the unnecessary joins. Where possible, try to reduce the datasets you are joining before returning your results. Use plenty of temp tables, views, etc. if you need to.
In the case of large datasets with conditions, try to pre-apply your conditions through a view before your joins to reduce the number of records.
100k joining 100k is a lot bigger than 2k joining 3k
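A rough sketch of that idea, with entirely made-up table, column and filter names, just to show the shape of pre-filtering into a temp table before the join:
-- Reduce the big input first...
SELECT OrderID, CustomerID, OrderDate, Total
INTO #recent_orders
FROM dbo.Orders
WHERE OrderDate >= '20240101';   -- hypothetical condition
-- ...then join the much smaller sets
SELECT c.CustomerName, r.OrderDate, r.Total
FROM #recent_orders AS r
JOIN dbo.Customers AS c
  ON c.CustomerID = r.CustomerID;
DROP TABLE #recent_orders;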