It's rare that I get completely confused, but I wanted to see what others think about what I'm experiencing.
I've been asked to look at a slow running stored procedure. Currently takes around 8-12 minutes to run.
So I noticed a useful index was missing and added it - the proc now takes 8 seconds. But if I restart SQL Server and re-run the stored proc, it takes 8-12 minutes again.
If I then stop the query, delete the new index and re-run the stored proc, it takes seconds again. Bizarre.
Has anyone ever experienced anything like this? The stored procedure is calling a View if that makes any difference.
Q. If I restart SQL Server and re-run the stored proc, it takes 8-12 minutes again.
When you perform your query, the data is read into memory in blocks. These blocks remain in memory but they get "aged". This means the blocks are tagged with their last access time, and when SQL Server requires another block for a new query and the memory cache is full, the least recently used block (the oldest) is kicked out of memory. (In most cases - blocks read by full table scans are aged out almost instantly, to prevent full table scans from overrunning memory and choking the server.)
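If you want to see this for yourself, you can peek at what's currently in the buffer pool. A rough sketch (sys.dm_os_buffer_descriptors is a real DMV; grouping by database is just one way to slice it, and each page is 8KB):

-- how much of each database is currently cached in memory
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;

Run it before and after your stored proc and you should see your database's share of the cache grow.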
Q. If I then stop the query and delete the new index then re-run the stored proc it takes seconds again.
What is happening here is that the data blocks in memory from the first query haven't been kicked out of memory yet, so they can be used for your second query, meaning disk access is avoided and performance is improved.
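By the way, if you want to reproduce the cold-cache timings without restarting SQL Server, there's a standard trick (dev/test only - it flushes the buffer pool for the whole instance):

CHECKPOINT;              -- write any dirty pages to disk first
DBCC DROPCLEANBUFFERS;   -- then drop the clean pages from memory
-- re-running the proc now should show the slow "cold" behaviour again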
I have a query that never finishes (takes more than 2 hours) when I run it at 11am, but takes 1 min when I run it at 7pm. So obviously the execution plan changes during the day. What can cause that?
The data does not change. Auto stats are turned off; the stats and indexes get rebuilt overnight.
I run it in the client's environment, so I can't really tell what else uses the same SQL server. So this is my main concern, I reckon that the server gets so loaded by something else that it can't cope with my query.
Which server metrics should I look at to find the cause of the problem?
Where else should I look for a problem? What other factors should I consider?
Any help will be highly appreciated.
UPDATE: Forgot to mention that I have a whole bunch of other big queries running at the same time (11am) and they all run fine. In fact I have 7 queries that run fine in the evening and don't finish in the morning. They all use the same view, which joins quite large tables, and that is the only difference between the queries that fail and the queries that don't. So I wonder if SQL Server does not have enough memory or something to save the intermediate results of the view execution, and that is why the query never finishes.
So which server metrics should I monitor to find the problem?
UPDATE: And unfortunately I don't have the execution plan of the morning run due to permissions.
UPDATE: I don't think table locks are the cause, as I know what's using my database on the server and I can run the same query at, say, 12pm when nothing else is running from my side (i.e. there should be no locks and uncommitted transactions on my tables) and the query takes the same awful amount of time.
Lots of things can impact this. Is this really an ad-hoc query or a stored procedure call? A query is recompiled on each call; a stored proc uses a cached plan. If the parameters for the stored procedure can produce wildly varying result sets, then you can use the WITH RECOMPILE hint.
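For a one-off test you don't even need to change the procedure; there's a per-call form too. A sketch (the proc and parameter names here are made up):

-- recompile just this execution, leaving the cached plan alone
EXEC dbo.usp_MyReport @StartDate = '2012-01-01' WITH RECOMPILE;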
If you can live with dirty reads I would put:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
at the top of your code. This will allow your query to access data being held in locks by other processes. The risk is that if the data changes your answer may not be 100% correct. If your data is always additive or a point in time value is acceptable then this works really well.
I'd go through the execution plan of the query in the evening and optimise it even if 1 min is acceptable; it may be reading vast amounts of data which you have capacity for in the evening but are contending for in the morning.
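Since you mentioned you can't get the morning execution plan due to permissions, SET STATISTICS IO/TIME may still work for you (they're standard session options and, as far as I know, don't need SHOWPLAN permission). Something like:

SET STATISTICS IO ON;    -- reports logical/physical reads per table
SET STATISTICS TIME ON;  -- reports CPU and elapsed time
-- run your query here, once in the morning and once in the evening,
-- then compare the reads and times between the two runs

If the logical reads are the same in both runs but physical reads and elapsed time explode in the morning, you're contending for memory/disk rather than getting a different plan.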
Currently, as a single user, it takes 260ms for a certain query to run from start to finish.
What will happen if I have 1000 queries sent at the same time? Should I expect the same query to take ~4 minutes (260ms * 1000)?
It is not possible to make predictions without any knowledge of the situation. There will be a number of factors which affect this time:
- Resources available to the server (if it is able to hold data in memory, things run quicker than if disk is being accessed)
- What is involved in the query (e.g. a repeated query will usually execute quicker the second time around, assuming the underlying data has not changed)
- What other bottlenecks are in the system (e.g. if the webserver and database server are on the same system, the two processes will be fighting for available resources under heavy load)
The only way to properly answer this question is to perform load testing on your application.
We have a Windows Server 2003 (x64) machine running as a database server.
The server is equipped with 32GB of RAM.
Usually the database's memory usage (per Task Manager) is between 5-10%.
However, sometimes it suddenly shoots up to 100% and stays there, randomly and without any changes to code or executions.
All sorts of research, paid for or done by me, have pointed to a single stored procedure.
When the database is at 100%, disabling this procedure brings it back to normal.
Now this sounds quite obvious but here is the strange part.
The stored procedure is optimized and the memory usage (from execution plan) is 0.01, which is extraordinarily good. Normally executing the stored procedure will return the resultset instantaneously. I also paid a RackSpace Fanatic Support DBA to look over this, and he said that he sees no problem with the stored procedure.
Now the extra weird bit.
Running the SP is instantaneous.
When the DB is at 100%, running the SP keeps executing for minutes on end.
Disabling the SP sends the DB down to 5-10%.
While the SP is enabled and the DB is at 100%, if I open a new query window and run the EXACT code from the SP - but as a query, not as an SP - the results are returned INSTANTANEOUSLY again.
So, although at first glance, it sounds that the SP needs optimization, the actual code in the SP is not a problem.
I am desperate!
The execution plan can change depending on input parameters to the SP and the size of the result set.
You could try adding WITH RECOMPILE to the stored procedure to get a fresh execution plan for every call. It will make it a little bit slower, but sometimes SQL Server gets stuck with an unusually bad execution plan for the majority of the queries, and a recompile helps in those scenarios.
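A sketch of what that looks like (the procedure name, parameter and body here are hypothetical - keep your own):

ALTER PROCEDURE dbo.usp_MyProc
    @CustomerId INT
WITH RECOMPILE   -- forces a fresh plan on every execution
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END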
Profiler:
SQL Server comes with a great tool called Profiler which lets you see in real time the queries that are running on the server. You should run the profiler and find out what is actually happening and use that to find the culprit.
There are 4 measurements for queries: Duration, CPU, Reads and Writes. The SQL statements that take up a lot of these (individually or combined) and are called with high frequency are the best candidates for optimization.
Once you run the profile and capture the output, you should be able to identify the items for optimization. You can then run the SQL Statements, review the execution plans and perform the necessary optimization on it.
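If you'd rather not run a trace, the plan cache DMVs give you a similar "top offenders" view. A sketch (the DMVs are real; the TOP 10 and ordering by reads are just one sensible choice):

-- the most expensive cached statements by average logical reads
SELECT TOP 10
    qs.execution_count,
    qs.total_worker_time   / qs.execution_count AS avg_cpu_microseconds,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_logical_reads DESC;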
It could be that the statement itself is fine, and instead some locking / blocking / deadlocking is causing this. There may be something else running on the server at the same time that is taking up the resources needed for this SP and is causing the spike in CPU.
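A quick way to check for that while the DB is pegged at 100% - a sketch using sys.dm_exec_requests (a real DMV):

-- sessions that are currently being blocked, and by whom
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

If the SP's session shows up here with a non-zero blocking_session_id, the problem is the blocker, not the SP's code.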
Read up on Profiler:
http://msdn.microsoft.com/en-us/library/ms187929.aspx
I have been struggling with a problem that only happens when the database has been idle for a period of time with respect to the data being queried. The first query will be extremely slow, on the order of 30 seconds, and then related queries will be fast, like 0.1 seconds. I am assuming this is related to caching, but I have been unable to find the cause of it.
Changing the MySQL variables tmp_table_size and max_heap_table_size to larger values had no effect, except to create the temp tables in memory.
I do not think this is related to the query itself as it is well indexed and after the first slow query, variants of the same query do not show up in the slow query log. I am most interested in trying to determine the cause of this or a way to reset the offending cache so I can troubleshoot the issue.
Pages of the InnoDB data files get cached in the InnoDB buffer pool. This is what you'd expect: reading from files is slow, even on good hard drives, especially the random reads that databases mostly do.
It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, then accessing them is fast. Or something similar.
This is what I'd expect.
Ideally, use the same engine for all tables (exceptions: system tables, temporary tables (perhaps), and very small or short-lived tables). If you don't do this, the engines have to fight for RAM.
Assuming all your tables are InnoDB, make the buffer pool use up to 75% of the server's physical RAM (assuming you don't run too many other tasks on the machine).
Then you will be able to fit around 12G of your database into RAM, so once it's "warmed up", the "most used" 12G of your database will be in RAM, where accessing it is nice and fast.
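As a concrete sketch, that's a one-line change in my.cnf (the 12G figure assumes roughly 16G of physical RAM on a dedicated database box - adjust for your machine, and note that older MySQL versions need a server restart to pick it up):

[mysqld]
# ~75% of physical RAM, assuming a dedicated 16G server
innodb_buffer_pool_size = 12G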
Some MySQL users tend to "warm up" production servers following a restart by sending them queries copied from another machine for a while (these will be replication slaves) until they add them into their production pool. This avoids the extreme slowness seen while the cache is cold. For example, YouTube does this (or at least it used to; Google bought them and they may now use Google-fu).
MySQL Workbench:
The below isn't 100% related to this SO question, but the symptoms are very similar, and this is the first SO result when searching for "mysql workbench slow" or related terms, so hopefully it's useful for others.
Clear the query history! Following the process described in "MySql workbench query history ( last executed query / queries ) i.e. create / alter table, select, insert update queries" to clear MySQL Workbench's query history really sped up the program for me.
In summary: change the Output pane to History Output, right click on a Date and select Delete All Logs.
The issue I was experiencing was "slow first query" in that it would take a few seconds to load the results even though the duration/fetch were well under 1 second. After clearing my query history, the duration/fetch times stayed the same (well under 1 second, so no DB behavior actually changed), but now the results loaded instantly rather than after a few second delay.
Is anything else running on your MySQL server? My thought is that maybe after the first query, your table is still cached in memory, and once it's idle, another process causes it to be de-cached. Just a guess though.
How much memory do you have, and what else is running?
I had an SSIS package that was timing out. The query was very simple, from a single MySQL table, but it sometimes returned a lot of records and would sometimes take a few minutes initially to execute, then only a few milliseconds afterwards if I wanted to query it again. We were stuck with the ADO connection, which meant it would time out after 30 seconds, so about half the databases we were trying to load were failing.
After beating my head against the wall, I tried performing an initial query first: very simple and only returning a few rows. Since it was so simple, it executed fast and put the table in the cache for faster querying. In the next step of the package I would run the more complex query which returned the large data set that kept timing out. Problem solved - all tables loaded. I may start doing this on a regular basis; the complex queries execute much faster by doing a simple query first.
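To illustrate the idea, a sketch of the two-step pattern (all table/column names here are made up):

-- step 1: a cheap warm-up query that touches the table and pulls pages into cache
SELECT id FROM big_table LIMIT 1;

-- step 2: the expensive query that used to time out, now running against a warm cache
SELECT id, payload, created_at
FROM big_table
WHERE created_at >= '2015-01-01';

How much this helps depends on how much of the table the warm-up actually touches, but in my case it was enough to get under the timeout.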
Try comparing the output of "vmstat 1" on the Linux command line when running the query after a period of time, versus when you re-run it and get results fast. Specifically, check the "bi" column (that's the blocks read in from disk per second).
You may find the operating system is caching the disk blocks in the fast case (and thus a low "bi" figure), but not in the slow case (and hence a large "bi" figure).
You might also find that vmstat shows high/low CPU usage in either case. If it's low when fast, and disk throughput is also low, then your system may still be returning a cached query, even though you've indicated the relevant config value is set to zero. Perhaps check the output of SHOW ENGINE INNODB STATUS and SHOW VARIABLES to confirm (a sketch of these checks follows below).
innodb_buffer_pool_size may also be set high (it should be...), which would cache the blocks even before the OS can return them.
You might also find that "key_buffer" is set high - this would cache the keys in the indexes, which could make your select blindingly fast.
Try checking the MySQL Performance Blog site for lots of useful info.
I had an issue where MySQL 5.6 was slow on the first query after an idle period. This was a connection problem, not a MySQL instance problem. E.g. if you run MySQL Query Browser, execute "select * from some_queue", leave it alone for a couple of hours and then execute any query, it runs slow, while at the same time processes on the server or a new instance of the Browser will select from the same tables instantly.
Adding skip-host-cache and skip-name-resolve to the my.ini file solved this problem.
I don't know why that is. Why I tried this: MySQL 5.1 without those options was slow establishing connections from other networks (e.g. the server is 192.168.1.100; 192.168.1.101 connects fast, 192.168.2.100 connects slow). MySQL 5.6 didn't have that problem to start with, so we didn't add these to my.ini initially.
UPD: That actually solved half the cases. Setting wait_timeout to the maximum integer value fixed the other half. Maybe I can even remove skip-host-cache and skip-name-resolve now and it won't slow down in 100% of the cases.
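For reference, the combined my.ini sketch (the option names are real; the wait_timeout value is just a large example - the documented maximum varies by platform):

[mysqld]
skip-host-cache
skip-name-resolve
# keep idle connections alive much longer
wait_timeout = 31536000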