I recently moved a bunch of tables from an existing database into a new database because the original database was getting rather large. After doing so I noticed a dramatic decrease in the performance of my queries when running against the new database.
The process I took to recreate the new database is this:
Generate the table CREATE scripts using SQL Server's automatic script generator.
Run the CREATE TABLE scripts.
Insert all data into the new database using INSERT INTO with a SELECT from the existing database.
Run all the ALTER scripts to create the foreign keys and any indexes.
Does anyone have any ideas about possible problems with my process, or a key step I'm missing that is causing this performance issue?
Thanks.
First, at a minimum, I would make sure that auto create statistics is enabled. You can also set auto update statistics to true.
After that I would update the stats by running
sp_updatestats
or
UPDATE STATISTICS
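For example, a quick pass over the new database might look something like this (the table name below is just a placeholder):

-- Refresh statistics for every table in the current database
EXEC sp_updatestats;

-- Or target an individual table with a full scan (table name is a placeholder)
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;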
Also realize that the first time you hit the queries they will be slower because nothing will be cached in RAM. The second hit should be much faster.
Did you script the indexes from the tables in the original database? Missing indexes could certainly account for poor performance.
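If you want to double-check, one rough approach (assuming SQL Server 2005 or later) is to list the indexes in each database and diff the output:

-- Run this in both the old and the new database and compare the results
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, i.type_desc
FROM sys.indexes AS i
WHERE i.object_id IN (SELECT object_id FROM sys.tables)
ORDER BY table_name, index_name;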
Have you tried looking at the execution plans on each server when running these queries - that should allow you to easily see if they're doing something different e.g. table scanning due to a missing index, poor statistics etc.
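Along the same lines, you can get a quick numeric comparison without reading the plans in detail; the query below is only a placeholder:

-- Run the same query in both databases with these on and compare logical reads and CPU time
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.MyBigTable WHERE SomeColumn = 42; -- placeholder query
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;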
Are both DBs sat on the same box with their data files on the same drive arrays?
Can you tell what about those queries got slower? New access plans? The same plans, but they perform slower? Do they execute slower or are they suspended more? Did all queries get slower or just some? And last but not least, how do you know, ie. what exactly did you measure and how?
Some of the usual suspects could be:
The new storage is much slower (.mdf on slow disk, or on a busy disk)
You changed the data structure during move (ie. some indexes did not get ported)
You changed the data size (ie. compression options), resulting in more pages for the same data
Did anything else change at the same time, new app code or anything the like?
By extending the data size (you do not mention deleting the old tables) you are now thrashing the buffer pool (did the page life expectancy decrease in the performance counters?)
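If you want to check, page life expectancy is exposed through a DMV (assuming SQL Server 2005 or later):

-- Current Page Life Expectancy, in seconds
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';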
Look at how you set up the initial size and growth options. If you didn't give it enough space to begin with, or if you are growing by 1MB at a time, that could be a cause of performance issues.
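A sketch of checking and fixing that (the database and logical file names are placeholders):

-- See the current file sizes and growth increments
USE NewDb;
EXEC sp_helpfile;

-- Pre-size the data file and use a sensible fixed growth increment
ALTER DATABASE NewDb
MODIFY FILE (NAME = NewDb_Data, SIZE = 20GB, FILEGROWTH = 512MB);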
Related
I have an app that uses collections of objects and is really fast.
Now I am adding a database for persistence, so I started to save things in a SQLite database.
But now I found it is much, much slower.
I think it is because the disk is slower than RAM. Is it possible to have the DB in memory?
I found an in-memory DB in the documentation of SQLite, but it is only in memory and I need persistence.
So, is it possible to have the DB in memory for performance and save it to disk after some time?
Thank you and regards.
Update:
In the answers they say that it is because I am doing lots of inserts, and this is true. They say I should make a bulk insert.
My code looks like this with an in-memory collection called Computers:
foreach (var line in lines)
{
    Computers.Add(new Computer { ComputerName = line });
}
Now in the DB:
foreach (var line in lines)
{
    string sql = "INSERT INTO computers VALUES ('" + line + "')";
    SQLiteCommand command = new SQLiteCommand(sql, dbConnection);
    command.ExecuteNonQuery();
}
How can I do this in a single bulk insert?
You can use the Backup API to accomplish this. See http://www.sqlite.org/backup.html; a complete code sample in C is provided.
If you are doing lots of inserts at once, make sure you combine them in a single transaction. Otherwise SQLite waits one disk rotation for each insert, which can greatly slow down the insertion process. Doing this may allow you to use a disk-based SQLite database, without the slowdowns.
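As a rough sketch, the statements sent to SQLite would look like the following (the host names are placeholders); from C# you would typically get the same effect by calling something like dbConnection.BeginTransaction() before the loop and committing it afterwards:

BEGIN TRANSACTION;
INSERT INTO computers VALUES ('host-01'); -- placeholder rows
INSERT INTO computers VALUES ('host-02');
-- ...one INSERT per line, all inside the same transaction...
COMMIT;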
See Also
SQLite Insert very slow?
If you have a database in memory and only occasionally flush it to disk, you lose one of the most important features of a database, which is ACID. In other words, your app can write data thinking it is persisted, then a subsequent problem can cause you to lose the data after all.
Many databases will perform quite well (though still much slower than in-RAM solutions) if you have enough RAM available for the database to cache frequently accessed items.
Start by examining what exactly is slow in your interactions with the database. Are you fetching data that doesn't change over and over? Perhaps cache that in the database? Are you making many separate updates to the database that could be combined into a single update?
I recently inherited a poorly maintained production database with heavily fragmented indexes (most of the indexes are more than 80% fragmented). I requested downtime with my manager to perform an index rebuild, but unfortunately downtime is not allowed at the moment. If an online index reorganize is not an option either, can I do the following?
Restore a fresh production copy to a test instance
Rebuild Indexes, update statistics
Overwrite the prod database from test instance
Apply transaction logs to bring the database up to date.
Though the above method also requires downtime, it's relatively less. I wanted to know whether one can do this or whether I am just being stupid :) Please advise
RK
SQL Server 2005 has online index rebuilds (that is, non-blocking).
Otherwise, it's never offline (that is, the database or server is not off line) but it does take an exclusive lock on the table/index being rebuilt.
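If you are on an edition that supports it, the online variant looks something like this (index and table names are placeholders); a REORGANIZE is always online and can be stopped at any time, at the cost of being less thorough:

-- Online rebuild (SQL Server 2005+ Enterprise Edition; names are placeholders)
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);

-- Reorganize: always online, can be interrupted and resumed later
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;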
If most of the application's access is via seeks, then the fragmentation is not a problem. Otherwise I'd try to find some time in which the index rebuild will have minimal effect, and do it via a scheduled job. Surely the application isn't running 24/7/365 with poor maintenance, without the company expecting some problem to occur. (Do they change the oil on their cars?)
As far as your 4 step solution, copying the table to another database, rebuilding the index, and copying it back won't accomplish anything more than just rebuilding the existing index. On copying it back the index is rebuilt anyway, so just try and schedule a couple of tables at a time, until you get everything done.
Good luck.
I have a myISAM table running in production on mySQL, and by doing a few tests, we've found we can tremendously speed up a query by adding a certain compound index. So far so good. However, I am not really sure about the best way to add this index in a production environment without locking the table for a long time (it's got 27GB of data, so not that much, but it does take a while).
Do you have any tips? If this was a more sophisticated setup of course we'd have a live replica of all of the data on another machine, and we could safely switch. Unfortunately, we're not there yet, and I would like to speed up this query as soon as possible (it's causing big customer headaches). Is there some simple way to replicate the data and then do a swap-out trick? Some other tricks that I am missing?
UPDATE: Reading about "Online Index Operations" in SQL Server makes me very jealous http://msdn.microsoft.com/en-us/library/ms191261.aspx :)
Thanks!
You can use replication to get downtime on the order of a couple of minutes, instead of the hours it might take to create an index on that table.
To set up the slave, see http://dev.mysql.com/doc/refman/5.0/en/replication-howto-existingdata.html
A recommendation I can make to help speed up the process: in step 2, follow the "Creating a Data Snapshot Using Raw Data Files" method, but instead of copying over the wire to the slave, copy to a different location on the master, and bring the master back up as soon as the copy is done and you've made the necessary changes to the config file (set server-id and enable binary logging). This will minimize your downtime to just a minute or two. Once the server is back up, you can copy the copied files to the slave box.
Once you have the slave up and running and you have verified everything is replicating properly, you can pause the slave and create the index on the slave. When the index creation is complete, resume the slave; this will catch the slave up to the master. On the master, use FLUSH TABLES WITH READ LOCK, check the slave status to make sure the log position on the master and the slave match, and if they do, shut down the slave and copy the files for that table back to the master.
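In rough outline, the commands involved would look something like this (table, index, and column names are placeholders):

-- On the slave: pause replication and build the index
STOP SLAVE;
ALTER TABLE mytable ADD INDEX idx_col1_col2 (col1, col2);
START SLAVE;

-- On the master, once the slave has caught up: freeze writes and note the binlog position
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

-- On the slave: confirm it has reached the same position before copying the files back
SHOW SLAVE STATUS;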
I'm with Randy. We've been in a similar situation, and there are two ways in MySQL to accomplish something like this:
Take the server down while the index is created. This is what you'll probably do. It's simple, it's easy, it works. Time to do? Maybe half an hour to 45 minutes, depending on disk bandwidth. See below.
Make a new table with the new index, copy all the data over, pause the server, delete the first table, rename the new one to the old name, start the server. Downtime? Maybe 10 minutes, but it's really complicated.
Option two works, and saves you the downtime of creating the index (if it takes a long time). But it takes more space, it's more complicated (since you have to deal with the new records inserted into the main table in the meantime), and it will probably lock the table on MyISAM while copying the data out. Deleting a table will take some time, and renaming the new table to the old name will take some time. It's just really complicated. If you had a 2TB table this might be useful, but for 27GB it's probably overkill.
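As a sketch of option two (assuming MyISAM; table, index, and column names are placeholders):

-- Build a copy of the table that already has the new index
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new ADD INDEX idx_col1_col2 (col1, col2);
INSERT INTO mytable_new SELECT * FROM mytable;

-- During a brief pause in writes, swap the tables atomically
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;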
Do you have a second server that is close in specifications to your production server? Load up your most recent backup and do the index there, so you know about how long it will take to add. Then plan for downtime.
InnoDB is better about many things, but new indexes still lock the table. The abilities that MSSQL (and I think PostgreSQL) have to do those kinds of things without locking would be great.
Find your low usage window and take your application offline during the index build. Since you don't have replication or a multimaster or whatever, you're just going to have to bite the bullet on this one. See you at 1am. :-)
Not much you can do with one server here.
If you copy the table and do a dry run, at least you'll find out how long it's going to take without locking the live table, so you can schedule some maintenance time if necessary, or make a decision whether you can just push the button and leave users hanging for a couple of minutes :)
Or schedule it for a quiet time...
echo "/usr/bin/mysql -uXXX -pXXX -e 'alter table mytable add key(col1, col2)'" | at 04:00
We have a SQL Server 2000 production environment where suddenly (ie. the last 3 days) something has caused the tempdb data file to grow very large (45 gigs with a database which is only 10 gigs).
Yesterday, after it happened again we shrank the database and ran the major batch processes individually without any problems. However, this morning the database was back up to 45 gigs.
Is there a simple way to find out what is causing this database to grow so large? Ideally, something which could be looked at today but if that is not available something which can be set to get that information tomorrow.
BTW: Shrinking the database gets back the space within a few seconds.
Agreed with Jimmy: you need to use SQL Profiler to find which temporary objects are being created so intensively. These may be temporary tables used by some reports or something similar.
I wanted to thank everyone for their answers as they definitely led to the cause of the problem.
We turned on SQL profiler and sure enough a large bulk load showed up. As we are working on a project to move the "offending" job to work also in mysql we will probably just watch things for now.
Do you have a job running that rebuilds the indexes? It is possible that it uses SORT_IN_TEMPDB.
Any other large queries that do sorting might also expand tempdb.
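For reference, on SQL Server 2000 a rebuild that sorts in tempdb would look something like this (index, table, and column names are placeholders):

-- Rebuild an existing index and do the sort work in tempdb rather than the user database
CREATE INDEX IX_MyIndex ON dbo.MyTable (SomeColumn)
WITH DROP_EXISTING, SORT_IN_TEMPDB;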
This may have something to do with the recovery model that the TempDB is set to. It may be set to FULL instead of BULK-LOGGED. FULL recovery increases the transaction log size until a backup is performed.
Look at the data file size vs. the transaction log size.
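A quick way to see that on SQL Server 2000:

-- File sizes for tempdb (data vs. log)
USE tempdb;
EXEC sp_helpfile;

-- Log space used per database, including tempdb
DBCC SQLPERF(LOGSPACE);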
I'm not a DBA but some thoughts:
Is it possible that there are temp tables being created but not dropped? ##tempTable?
Is it possible that there is a large temp table being created (and dropped) but the space isn't reclaimed?
Are you doing any sort of bulk loading where the system might use the temp table?
(I'm not sure if you can) but can you turn on Auto-Shrink for the tempdb?
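If you want to see what is currently sitting in tempdb, one quick check on SQL Server 2000 is to list its user objects (temp tables show up with padded names):

-- User objects currently in tempdb, newest first
SELECT name, crdate
FROM tempdb..sysobjects
WHERE xtype = 'U'
ORDER BY crdate DESC;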
I have been struggling with a problem that only happens when the database has been idle for a period of time for the data queried. The first query will be extremely slow, on the order of 30 seconds and then related queries will be fast like 0.1 seconds. I am assuming this is related to caching, but I have been unable to find the cause of it.
Changing the mysql variables tmp_table_size, max_heap_table_size to a larger size had no effect except to create the temp tables in memory.
I do not think this is related to the query itself as it is well indexed and after the first slow query, variants of the same query do not show up in the slow query log. I am most interested in trying to determine the cause of this or a way to reset the offending cache so I can troubleshoot the issue.
Pages of the innodb data files get cached in the innodb buffer pool. This is what you'd expect. Reading files is slow, even on good hard drives, especially random reads which is mostly what databases see.
It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, then accessing them is fast. Or something similar.
This is what I'd expect.
Ideally, use the same engine for all tables (exceptions: system tables, temporary tables (perhaps) and very small tables or short-lived ones). If you don't do this then they have to fight for ram.
Assuming all your tables are innodb, make the buffer pool use up to 75% of the server's physical ram (assuming you don't run too many other tasks on the machine).
Then you will be able to fit around 12G of your database into ram, so once it's "warmed up", the "most used" 12G of your database will be in ram, where accessing it is nice and fast.
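For example, the relevant setting and a quick way to inspect it might look like this (the 12G figure is just the one from the paragraph above):

-- Check the current buffer pool size, in bytes
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- In my.cnf, under [mysqld] (a restart is needed on older MySQL versions):
--   innodb_buffer_pool_size = 12G

-- Buffer pool hit rate and how "warm" it currently is
SHOW ENGINE INNODB STATUS;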
Some users of mysql tend to "warm up" production servers following a restart by sending them queries copied from another machine for a while (these will be replication slaves) until they add them into their production pool. This avoids the extreme slowness seen while the cache is cold. For example, Youtube does this (or at least it used to; Google bought them and they may now use Google-fu)
MySQL Workbench:
The below isn't 100% related to this SO question, but the symptoms are very related and this is the first SO result when searching for "mysql workbench slow" or related terms, so hopefully it's useful for others.
Clear the query history! Following the process described in "MySql workbench query history ( last executed query / queries ) i.e. create / alter table, select, insert update queries" to clear MySQL Workbench's query history really sped up the program for me.
In summary: change the Output pane to History Output, right click on a Date and select Delete All Logs.
The issue I was experiencing was "slow first query" in that it would take a few seconds to load the results even though the duration/fetch were well under 1 second. After clearing my query history, the duration/fetch times stayed the same (well under 1 second, so no DB behavior actually changed), but now the results loaded instantly rather than after a few second delay.
Is anything else running on your mysql server? My thought is that maybe after the first query, your table is still cached in memory. Once it's idle, another process is causing it to be de-cached. Just a guess though.
How much memory do you have and what else is running?
I had an SSIS package that was timing out. The query was very simple, from a single MySQL table, but it sometimes returned a lot of records and would sometimes take a few minutes initially to execute, then only a few milliseconds afterwards if I wanted to query it again. We were stuck with the ADO connection, which meant it would time out after 30 seconds, so about half the databases we were trying to load were failing.
After beating my head against the wall I tried performing an initial query first; very simple and only returning a few rows. Since it was so simple it executed fast and set the table in the cache for faster querying. In the next step of the package I would do the more complex query which returned the large data set that kept timing out. Problem solved - all tables loaded. I may start doing this on a regular basis, the complex queries execute much faster by doing a simple query first.
Try comparing the output of "vmstat 1" on the Linux command line when running the query after a period of time, versus when you re-run it and get results fast. Specifically check the "bi" column (that's the KB read from disk per second).
You may find the operating system is caching the disk blocks in the fast case (and thus a low "bi" figure), but not in the slow case (and hence a large "bi" figure).
You might also find that vmstat shows high/low cpu usage in either case. If it's low when fast, and disk throughput is also low, then your system may still be returning a cached query, even though you've indicated the relevant config value is set to zero. Perhaps check the output of show engine innodb status and SHOW VARIABLES and confirm.
innodb_buffer_pool_size may also be set high (it should be...), which would cache the blocks even before the OS can return them.
You might also find that "key_buffer" is set high - this would cache the keys in the indexes, which could make your select blindingly fast.
Try checking the MySQL Performance Blog site for lots of useful info.
I had an issue where MySQL 5.6 was slow on the first query after an idle period. This was a connection problem, not a MySQL instance problem: e.g. if you run MySQL Query Browser, execute "select * from some_queue", leave it alone for a couple of hours and then execute any query, it runs slowly, while at the same time processes on the server or a new instance of the Browser will select from the same tables instantly.
Adding skip-host-cache, skip-name-resolve to my.ini file solved this problem.
I don't know why that is. Why I tried this: MySQL 5.1 without those options was slow to establish connections from other networks (e.g. the server is 192.168.1.100, 192.168.1.101 connects fast, 192.168.2.100 connects slowly); MySQL 5.6 didn't have such a problem to start with, so we didn't add these to my.ini initially.
UPD: That solved half the cases, actually. Setting wait_timeout to the maximum integer fixed the other half. Maybe I can even remove skip-host-cache and skip-name-resolve now and it won't slow down in 100% of the cases.
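For reference, the timeout can be checked and raised like this; the exact maximum allowed value depends on the platform, so treat the number below as illustrative:

-- Idle time, in seconds, before the server drops a connection
SHOW VARIABLES LIKE 'wait_timeout';

-- Raise it for new connections (value shown is one year; the real cap is platform-dependent)
SET GLOBAL wait_timeout = 31536000;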