SQL execution plan changes during the day

I have a query that never finishes (takes more than 2 hours) when I run it at 11am, yet takes 1 min when I run it at 7pm. So obviously the execution plan changes during the day. What can cause that?
The data does not change. Auto stats are turned off; the stats and indexes get rebuilt overnight.
I run it in the client's environment, so I can't really tell what else uses the same SQL Server. This is my main concern: I reckon the server gets so loaded by something else that it can't cope with my query.
Which server metrics should I look at to find the cause of the problem?
Where else should I look for a problem? What other factors should I consider?
Any help will be highly appreciated.
UPDATE: Forgot to mention that I have a whole bunch of other big queries running at the same time (11am), and they all run fine. In fact I have 7 queries that run fine in the evening and don't finish in the morning. They all use the same view, which joins quite large tables, and that is the only difference between the queries that fail and the queries that don't. So I wonder if SQL Server does not have enough memory, or something like that, to hold the intermediate results of the view execution, and that is why the query never finishes.
So which server metrics should I monitor to find the problem?
UPDATE: Unfortunately I don't have the execution plan of the morning run, due to permission restrictions.
UPDATE: I don't think table locks are the cause, as I know what's using my database on the server, and I can run the same query at, say, 12pm when nothing else is running from my side (i.e. there should be no locks or uncommitted transactions on my tables) and it takes the same awful amount of time.

Lots of things can impact this. Is this really an ad hoc query or a stored procedure call? A query is recompiled on each call, while a stored proc uses a cached plan. If the parameters of the stored procedure can produce wildly varying results, you can use the WITH RECOMPILE hint.
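A minimal sketch of that option (the procedure, table, and column names here are hypothetical, not taken from the question):

CREATE PROCEDURE dbo.GetOrdersByRegion   -- hypothetical procedure
    @Region nvarchar(50)
WITH RECOMPILE                           -- compile a fresh plan on every call
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE Region = @Region;
END;

For a one-off test you can also recompile at the call site, e.g. EXEC dbo.GetOrdersByRegion @Region = N'West' WITH RECOMPILE; without changing the procedure itself.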
If you can live with dirty reads I would put:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
at the top of your code. This will allow your query to read data being held in locks by other processes. The risk is that, if the data changes, your answer may not be 100% correct; if your data is only ever added to, or a point-in-time value is acceptable, then this works really well.
I'd also go through the execution plan of the evening run and optimise the query anyway. Even if 1 min is acceptable, it may be reading vast amounts of data that you have the capacity for in the evening but are contending for in the morning.
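Since you can't capture the morning plan, one thing you can do is look at what the server is waiting on while the query crawls. A sketch against the standard DMVs (it requires VIEW SERVER STATE permission, which may be a hurdle given the restrictions mentioned above):

-- Show what every active request is doing while the slow query runs.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,            -- e.g. PAGEIOLATCH_* = waiting on disk, RESOURCE_SEMAPHORE = waiting for a memory grant
       r.wait_time,
       r.blocking_session_id,  -- non-zero means this request is blocked by that session
       r.cpu_time,
       r.logical_reads
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;        -- skip most system sessions

A pile of RESOURCE_SEMAPHORE waits at 11am would support the theory that the server is short of memory for those big view joins.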

Related

Timeout expired SQL Server 2008

I have a SQL Server database in production, and it has been live for 2 months. A while ago the web application associated with it started taking too long to load, and sometimes it reports that a timeout occurred.
I found a quick fix: running 'exec sp_updatestats' fixes the problem. But I need to run it constantly (every 5 minutes).
So I created a Windows service with a timer and started it on the server. My question is: what are the root causes and possible permanent solutions? Anyone?
Here is the most expensive query from Activity Monitor:
WAITFOR(RECEIVE TOP (1) message_type_name, conversation_handle, cast(message_body AS XML) as message_body from [SqlQueryNotificationService-2eea594b-f994-43be-a5ed-d9a47837a391]), TIMEOUT #p2;
To diagnose a poorly performing query you need to:
Identify the poorly performing query, e.g. via application logging, or a SQL Profiler trace filtered to show only queries whose duration exceeds a certain threshold, etc.
Get an execution plan for the query
At that point you can start to try to figure out what the performance issue is.
Given that exec sp_updatestats fixes your issue, it could be that statistics on certain tables are out of date (a pretty common cause of performance issues). If that's the case, then you might be able to tweak your statistics, or at least rebuild only those statistics that are causing issues.
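A sketch of such a targeted refresh (the table name is hypothetical):

-- Refresh statistics on one table; FULLSCAN reads every row, so it is
-- slower than the default sample but gives the most accurate statistics.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Check when each statistic on the table was last updated:
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Orders');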
It's worth noting that updating statistics also invalidates cached execution plans, so it's plausible that your issue is unrelated to statistics - you need to collect more information about the poorly performing queries before looking at solutions.
Your most expensive query looks like it's waiting for a message, i.e. it's in your list of long-running queries because it's designed to run for a long time, not because it's expensive.
Thanks everyone, I found a solution for my issue, and it's quite different. I had enabled the SQL dependency module on my SQL Server by turning ENABLE_BROKER on; that's what was producing the timeout query, so after setting it back off, everything is working fine now.
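For anyone hitting the same thing, a sketch of the relevant commands (the database name is hypothetical):

-- Turn Service Broker off; ROLLBACK IMMEDIATE kicks out open transactions
-- so the ALTER can take the exclusive lock it needs.
ALTER DATABASE MyAppDb SET DISABLE_BROKER WITH ROLLBACK IMMEDIATE;

-- To re-enable it later, if SqlDependency notifications are needed:
ALTER DATABASE MyAppDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;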

What are the factors that affect the time taken to run a SQL on a database?

I have a query that runs on a data warehouse. I ran the report last month and it returned results in, say, x minutes. The same report, run now on the same database without any modifications, returns the same results but takes y minutes,
where y > x, and the difference between the times is large.
The amount of data and the indexes are also the same; there is no difference in them.
Now clients are asking me for the reason. What are the possible reasons for this?
You leave a lot of questions open:
Is the database running on a dedicated server?
Do you run the reports from clients or directly on the server?
Have there been changes to the physical network? Have some settings been changed?
Did they (by accident) change the protocol used to communicate with the server (TCP, named pipes, ...)?
Have you tried defragmenting?
Have you rebooted the server?
Do you have an execution plan from before and after? (One way to capture comparable numbers is sketched below.)
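For that last point, a sketch of one way to capture comparable before/after numbers when the graphical plan isn't to hand:

SET STATISTICS IO ON;    -- reports logical/physical reads per table
SET STATISTICS TIME ON;  -- reports compile and execution CPU/elapsed time

-- ... run the report query here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;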
Most likely the query plan has changed. Some minor difference in the data has pushed the query optimiser's calculations onto a new, less optimal plan.
Here are a few:
The amount of data in the warehouse has changed.
Indexes might have been modified.
Your warehouse is split across different servers and there is connectivity lag between them...
Your database server is processing something else as well, leaving less memory and CPU for your reports to run.

Stored Procedure(s) slow on initial execution

Group,
I am still learning SQL, and I am running into a new problem. I have a couple stored procedures that are slow on their initial execution/connection. For example, I have a stored procedure (sp) that I use in an application to look up prices for a product. The first time I run the sp in the morning it may take 20-40 seconds to execute. After that it only takes 1-2 seconds... We also run daily updates on tables used by the sp(s) and after an update the cache is cleared and again it will take 20-40 seconds for the initial run and then 1-2 seconds after.
Is there any way around this? Not sure if I should add something to the Daily update to maybe fire my sp after the update (which could get messy) or if I can add something to my sp that tells it to not clear cache (which could cause space issues). I don't know what to do, everything works great after the initial execution...
Any suggestions are greatly appreciated.
The likely reason you're seeing the difference in speed is caching. Once you've run a SProc, its execution plan goes into cache and it's much faster. What we did in our environment was to run our most-used SProcs as a scheduled task around 7:30am so that they would be "warmed up" by 8am, when our users started their workdays.
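A sketch of such a warm-up script (the procedure name and parameter are hypothetical). Scheduled via SQL Server Agent shortly after the nightly update finishes, it pays the compile and disk-read cost before any user does:

-- Warm up the price-lookup proc with a representative call.
EXEC dbo.GetProductPrice @ProductID = 1;
-- A second call with a different parameter can help if the plan is parameter-sensitive.
EXEC dbo.GetProductPrice @ProductID = 2;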
There are two potential reasons for this.
First, the first time any stored proc runs it must be compiled, and this takes time. The compiled plan is then (depending on the vendor) cached so that subsequent executions do not have to recompile it. Even later, if it has not been executed for some time, the compiled plan may have been evicted from the cache and will need to be compiled again.
Secondly (again for most vendors), when you run any query, the data pages needed to execute it are read, but the query processor "reads" them from cache; only if the data pages are not in cache does it go to the disk. So the first time it needs a given set of data pages it will generally have to go to disk for them, but subsequent executions that need those same pages will get them from the in-memory cache. As disk I/O is orders of magnitude slower than RAM I/O, this can cause a very significant difference in query execution times.
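If you want to reproduce that cold-cache behaviour deliberately, a sketch for SQL Server (test servers only, never production):

CHECKPOINT;              -- flush dirty pages so the next command can evict them
DBCC DROPCLEANBUFFERS;   -- throw away cached data pages (forces disk reads)
DBCC FREEPROCCACHE;      -- throw away compiled plans (forces recompilation)
-- The next execution of the proc now pays both costs, like the first morning run.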

Database Server 100%, for no reason

We have a Windows Server 2003 (x64) running as a Database server.
The Database is equipped with 32GB of RAM
Usually the Database memory usage (Task Manager) is between 5-10%.
However sometimes the Database shoots up suddenly to 100% and remains there, randomly and without any changes to code or executions.
All sorts of research, paid and my own, have pointed to a single stored procedure.
When the database is at 100%, disabling this procedure brings it back to normal.
Now this sounds quite obvious but here is the strange part.
The stored procedure is optimized and the memory usage (from execution plan) is 0.01, which is extraordinarily good. Normally executing the stored procedure will return the resultset instantaneously. I also paid a RackSpace Fanatic Support DBA to look over this, and he said that he sees no problem with the stored procedure.
Now the extra weird bit.
Running the SP is normally instantaneous.
When the DB is at 100%, running the SP keeps executing for minutes on end.
Disabling the SP sends the DB back down to 5-10%.
And while the SP is enabled and the DB is at 100%, if I open a new query window and run the EXACT code from the SP, but as a query rather than as an SP, the results are returned INSTANTANEOUSLY again.
So although at first glance it sounds like the SP needs optimization, the actual code in the SP is not the problem.
I am desperate!
The execution plan can change depending on input parameters to the SP and the size of the result set.
You could try adding WITH RECOMPILE to the stored procedure to get a fresh execution plan for every call. It will make each call a little slower, but sometimes SQL Server gets stuck with an unusually bad execution plan for the majority of queries, and a recompile helps in those scenarios.
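Two lighter alternatives, sketched with hypothetical object names:

-- One-off: mark the proc so its next execution compiles a fresh plan.
EXEC sp_recompile N'dbo.MyProcedure';

-- Per-statement, inside the procedure body: recompile only the statement
-- that gets the bad plan, leaving the rest of the procedure's plan cached.
SELECT col1, col2
FROM dbo.MyTable
WHERE SomeColumn = @SomeParam
OPTION (RECOMPILE);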
Profiler:
SQL Server comes with a great tool called Profiler, which lets you see, in real time, the queries running on the server. You should run Profiler, find out what is actually happening, and use that to identify the culprit.
There are four measurements for queries: memory, CPU, reads, and writes. The SQL statements that consume a lot of these (individually or combined), and are called with high frequency, are the best candidates for optimization.
Once you run the profiler and capture the output, you should be able to identify the items to optimize. You can then run those SQL statements, review their execution plans, and perform the necessary optimization.
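If running Profiler against the box isn't practical, a sketch against the plan-cache DMVs (available since SQL Server 2005) surfaces the same kind of candidates:

-- Top cached statements by total CPU; swap the ORDER BY column for
-- total_logical_reads or total_logical_writes to rank by I/O instead.
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time,    -- total CPU, in microseconds
       qs.total_logical_reads,
       qs.total_logical_writes,
       st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;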
(edit: added content)
It could also be that the statement itself is fine, but locking / blocking / deadlocks are causing this. Something else may be running on the server at the same time, taking up the resources this SP needs and causing the spike in CPU.
Read up on Profiler:
http://msdn.microsoft.com/en-us/library/ms187929.aspx

mysql slow on first query, then fast for related queries

I have been struggling with a problem that only happens when the database has been idle for a period of time for the data being queried. The first query is extremely slow, on the order of 30 seconds, and then related queries are fast, around 0.1 seconds. I am assuming this is related to caching, but I have been unable to find the cause of it.
Changing the MySQL variables tmp_table_size and max_heap_table_size to larger values had no effect except to create the temp tables in memory.
I do not think this is related to the query itself, as it is well indexed, and after the first slow query, variants of the same query do not show up in the slow query log. I am most interested in determining the cause of this, or finding a way to reset the offending cache so I can troubleshoot the issue.
Pages of the InnoDB data files get cached in the InnoDB buffer pool. This is what you'd expect: reading from files is slow, even on good hard drives, especially the random reads that databases mostly do.
It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, then accessing them is fast. Or something similar.
This is what I'd expect.
Ideally, use the same engine for all tables (exceptions: system tables, temporary tables (perhaps), and very small or short-lived tables). If you don't do this, the engines have to fight for RAM.
Assuming all your tables are InnoDB, make the buffer pool use up to 75% of the server's physical RAM (assuming you don't run too many other tasks on the machine).
Then you will be able to fit around 12G of your database into RAM, so once it's "warmed up", the "most used" 12G of your database will be in RAM, where accessing it is nice and fast.
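A sketch of the setting in my.cnf, assuming the roughly 16 GB box those numbers imply (on older MySQL versions, changing this requires a server restart to take effect):

[mysqld]
# 75% of ~16 GB physical RAM; adjust to your actual hardware.
innodb_buffer_pool_size = 12G

You can verify the running value from any client with SHOW VARIABLES LIKE 'innodb_buffer_pool_size';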
Some users of MySQL "warm up" production servers following a restart by sending them queries copied from another machine for a while (these will be replication slaves) before adding them into the production pool. This avoids the extreme slowness seen while the cache is cold. YouTube does this, for example (or at least it used to; Google bought them and they may now use Google-fu).
MySQL Workbench:
The below isn't 100% related to this SO question, but the symptoms are very similar, and this is the first SO result when searching for "mysql workbench slow" or related terms, so hopefully it's useful for others.
Clear the query history! Following the process at "MySql workbench query history ( last executed query / queries ) i.e. create / alter table, select, insert update queries" to clear MySQL Workbench's query history really sped up the program for me.
In summary: change the Output pane to History Output, right click on a Date and select Delete All Logs.
The issue I was experiencing was a "slow first query", in that it would take a few seconds to load the results even though the duration/fetch times were well under 1 second. After clearing my query history, the duration/fetch times stayed the same (well under 1 second, so no DB behaviour actually changed), but the results now loaded instantly rather than after a few seconds' delay.
Is anything else running on your MySQL server? My thought is that maybe, after the first query, your table is still cached in memory, and once it sits idle, another process causes it to be evicted. Just a guess though.
How much memory do you have, and what else is running?
I had an SSIS package that was timing out. The query was very simple, against a single MySQL table, but it sometimes returned a lot of records and could take a few minutes to execute initially, then only a few milliseconds if I queried it again afterwards. We were stuck with the ADO connection, which meant it would time out after 30 seconds, so about half the databases we were trying to load were failing.
After beating my head against the wall, I tried running an initial query first: very simple, returning only a few rows. Since it was so simple, it executed quickly and put the table in the cache for faster querying. In the next step of the package I ran the more complex query that returned the large data set and had kept timing out. Problem solved - all tables loaded. I may start doing this on a regular basis; the complex queries execute much faster after a simple query runs first.
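A sketch of the pattern with hypothetical table and column names: one cheap priming query, then the real one:

-- Step 1: trivial query; returns in milliseconds but warms the table's cache.
SELECT id FROM big_table LIMIT 10;

-- Step 2: the expensive query that previously timed out now hits warm pages.
SELECT b.*, o.details
FROM big_table AS b
JOIN other_table AS o ON o.big_table_id = b.id;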
Try comparing the output of "vmstat 1" on the Linux command line when running the query after a period of idleness, versus when you re-run it and get results fast. Specifically, check the "bi" column (the KB read from disk per second).
You may find the operating system is caching the disk blocks in the fast case (hence a low "bi" figure) but not in the slow case (hence a large "bi" figure).
You might also find that vmstat shows high or low CPU usage in either case. If CPU is low when the query is fast, and disk throughput is also low, then your system may still be returning a cached query result, even though you've indicated the relevant config value is set to zero. Perhaps check the output of SHOW ENGINE INNODB STATUS and SHOW VARIABLES to confirm.
innodb_buffer_pool_size may also be set high (it should be...), which would cache the blocks even before the OS can return them.
You might also find that "key_buffer" is set high - this would cache the keys in the indexes, which could make your select blindingly fast.
Try checking the MySQL Performance Blog for lots of useful info.
I had an issue where MySQL 5.6 was slow on the first query after an idle period. It turned out to be a connection problem, not a MySQL instance problem: e.g. if you run MySQL Query Browser, execute "select * from some_queue", leave it alone for a couple of hours, and then execute any query, it runs slowly, while at the same time processes on the server, or a new instance of the Browser, will select from the same tables instantly.
Adding skip-host-cache and skip-name-resolve to the my.ini file solved this problem, as sketched below.
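That is, in the [mysqld] section of my.ini:

[mysqld]
skip-host-cache      # don't cache host-name lookup results
skip-name-resolve    # skip reverse DNS on incoming connections entirely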
I don't know why that is. What made me try it: MySQL 5.1 without those options was slow to establish connections from other networks (e.g. if the server is 192.168.1.100, then 192.168.1.101 connects fast but 192.168.2.100 connects slowly), while MySQL 5.6 didn't have that problem to begin with, so we hadn't added these options to my.ini initially.
UPD: Actually this only solved half the cases. Setting wait_timeout to the maximum integer fixed the other half. Maybe I can even remove skip-host-cache and skip-name-resolve now and it won't slow down in 100% of the cases.
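For reference, a sketch of that second change in my.ini (the actual maximum for wait_timeout is platform-dependent; Windows builds clamp it lower than the one-year Unix limit):

[mysqld]
wait_timeout = 31536000    # effectively: never drop idle connections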