We have a C# application that receives a file each day with ~35,000,000 rows. It opens the file, parses each record individually, formats some of the fields and then inserts one record at a time into a table. It's really slow, which is expected, but I've been asked to optimize it.
I have been instructed that any optimizations must be confined to SQL only, i.e., there can be no changes to the process or the C# code. I'm trying to come up with ideas on how I can speed up this process while being limited to SQL modifications only. I have a couple of ideas I want to try, but I'd also like feedback from anyone who has found themselves in this situation before.
Ideas:
1. Create a clustered index on the table so that inserts always occur at the tail end of the table. The records in the file are ordered by date/time and the current table has no clustered index, so this seems like a valid approach (see the sketch after this list).
2. Somehow reduce the logging overhead. This data is volatile in nature, so losing the ability to roll back is not a big deal. Even if the process blew up halfway through, they would just restart it.
3. Change the isolation level. Perhaps there is an isolation level that is better suited to sequential single-record inserts.
4. Reduce connection time. The C# app is opening/closing a connection for each insert. We can't change the C# code, though, so perhaps there is a trick to reducing the overhead/time of making a connection.
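For reference, here is a rough sketch of what I have in mind for ideas 1 and 2, assuming a SQL Server table and database with made-up names (ours are different):

-- Idea 1: cluster on the date/time column so new rows always append at the end
CREATE CLUSTERED INDEX IX_DailyLoad_RecordDate
    ON dbo.DailyLoad (RecordDate);

-- Idea 2: cut logging overhead; we don't need point-in-time recovery for this data
ALTER DATABASE DailyLoadDb SET RECOVERY SIMPLE;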
I appreciate anyone taking the time to read my post and throw out any ideas they feel would be worth it.
Thanks,
Dean
I would suggest the following -- if possible.
Load the data into a staging table.
Do the transformations in SQL.
Bulk insert the data into the final table.
The second suggestion would be:
Modify the C# code to write the data into a file.
Bulk insert the file, either into a staging table or the final table (see the sketch below).
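A rough sketch of what the two suggestions could look like in T-SQL (the file path, table names, columns, and conversions below are placeholders, so adjust them to your schema):

-- 1. Bulk load the raw file into a staging table: one statement, no per-row round trips
BULK INSERT dbo.StagingDailyFile
FROM 'D:\feeds\daily_file.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK, BATCHSIZE = 100000);

-- 2. Do the formatting/transformations in a single set-based statement
INSERT INTO dbo.FinalTable (RecordDate, Account, Amount)
SELECT CONVERT(datetime, RawDate, 112),   -- 112 = yyyymmdd
       LTRIM(RTRIM(RawAccount)),
       CAST(RawAmount AS decimal(18, 2))
FROM dbo.StagingDailyFile;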
Unfortunately, your problem is 35 million round trips from C# to the database. I doubt there is any database optimization that can fix that performance problem. In other words, you need to change the C# code to fix the performance issue. Anything else is probably just a waste of your time.
You can minimize logging either by using simple recovery or writing to a temporary table. Either of those might help. However, consider the second option, because it would be a minor change to the C# code and could result in big improvements.
Or, if you have to do the best in a really bad situation:
Run the C# code and database on the same server. Be sure it has lots of processors.
Attach lots of SSD or memory for the database (if you are not already using it).
Load the data into table spaces that are only on SSD or in memory.
Copy the data from the local database to the remote one.
When doing an ALTER TABLE statement in MySQL, the whole table is read-locked (allowing concurrent reads, but prohibiting concurrent writes) for the duration of the statement. If it's a big table, INSERT or UPDATE statements could be blocked for a looooong time. Is there a way to do a "hot alter", like adding a column in such a way that the table is still updatable throughout the process?
Mostly I'm interested in a solution for MySQL but I'd be interested in other RDBMS if MySQL can't do it.
To clarify, my purpose is simply to avoid downtime when a new feature that requires an extra table column is pushed to production. Any database schema will change over time, that's just a fact of life. I don't see why we should accept that these changes must inevitably result in downtime; that's just weak.
The only other option is to do manually what many RDBMS systems do anyway...
- Create a new table
You can then copy the contents of the old table over a chunk at a time, while always being cautious of any INSERT/UPDATE/DELETE on the source table. (This could be managed by a trigger. Although it would cause a slowdown, it's not a lock...)
Once finished, rename the source table, then rename the new table to take its place, preferably in a transaction (see the sketch below).
Once finished, recompile any stored procedures, etc that use that table. The execution plans will likely no longer be valid.
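A minimal sketch of that approach in MySQL, with invented names and the chunked copy and sync trigger reduced to a single illustrative statement:

-- Create the new table with the extra column
CREATE TABLE my_table_new LIKE my_table;
ALTER TABLE my_table_new ADD COLUMN new_col INT NULL;

-- Copy the data across in chunks (repeat for successive id ranges)
INSERT INTO my_table_new (id, col_a, col_b)
SELECT id, col_a, col_b FROM my_table
WHERE id BETWEEN 1 AND 100000;

-- Swap the tables; in MySQL, RENAME TABLE switches both names in one atomic step
RENAME TABLE my_table TO my_table_old, my_table_new TO my_table;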
EDIT:
Some comments have been made about this limitation being a bit poor. So I thought I'd put a new perspective on it to show why it's how it is...
Adding a new field is like changing one field on every row.
Field Locks would be much harder than Row locks, never mind table locks.
You're actually changing the physical structure on the disk, every record moves.
This really is like an UPDATE on the whole table, but with more impact...
Percona makes a tool called pt-online-schema-change that allows this to be done.
It essentially makes a copy of the table and modifies the new table. To keep the new table in sync with the original, it uses triggers. This allows the original table to be accessed while the new table is prepared in the background.
This is similar to Dems' suggested method above, but it does so in an automated fashion.
Some of their tools have a learning curve, namely connecting to the database, but once you have that down, they are great tools to have.
Ex:
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=db,t=numbers_are_friends
This question is from 2009. MySQL now offers a solution:
Online DDL (Data Definition Language)
A feature that improves the performance, concurrency, and availability of InnoDB tables during DDL (primarily ALTER TABLE) operations. See Section 14.11, “InnoDB and Online DDL” for details.
The details vary according to the type of operation. In some cases, the table can be modified concurrently while the ALTER TABLE is in progress. The operation might be able to be performed without doing a table copy, or using a specially optimized type of table copy. Space usage is controlled by the innodb_online_alter_log_max_size configuration option.
It lets you adjust the balance between performance and concurrency during the DDL operation, by choosing whether to block access to the table entirely (LOCK=EXCLUSIVE clause), allow queries but not DML (LOCK=SHARED clause), or allow full query and DML access to the table (LOCK=NONE clause). When you omit the LOCK clause or specify LOCK=DEFAULT, MySQL allows as much concurrency as possible depending on the type of operation.
Performing changes in-place where possible, rather than creating a new copy of the table, avoids temporary increases in disk space usage and I/O overhead associated with copying the table and reconstructing secondary indexes.
see MySQL 5.6 Reference Manual -> InnoDB and Online DDL for more info.
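For example, to add a column while still allowing both reads and writes on the table (the table and column names here are just placeholders):

-- MySQL 5.6+ online DDL: errors out if the change cannot be done in place without blocking DML
ALTER TABLE my_table
    ADD COLUMN c1 INT,
    ALGORITHM=INPLACE, LOCK=NONE;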
It seems that online DDL is also available in MariaDB:
Alternatively you can use ALTER ONLINE TABLE to ensure that your ALTER TABLE does not block concurrent operations (takes no locks). It is equivalent to LOCK=NONE.
MariaDB KB about ALTER TABLE
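So, assuming a table called my_table, something like the following should avoid blocking concurrent operations:

ALTER ONLINE TABLE my_table ADD COLUMN c1 INT;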
See Facebook's online schema change tool.
http://www.facebook.com/notes/mysql-at-facebook/online-schema-change-for-mysql/430801045932
Not for the faint of heart; but it will do the job.
I recommend Postgres if that's an option. With Postgres there is essentially no downtime with the following operations (a short sketch follows below):
ALTER TABLE ADD COLUMN (if the column can be NULL)
ALTER TABLE DROP COLUMN
CREATE INDEX (must use CREATE INDEX CONCURRENTLY)
DROP INDEX
Another great feature is that most DDL statements are transactional, so you can do an entire migration within a SQL transaction, and if something goes wrong, the entire thing gets rolled back.
I wrote this up a little while ago; perhaps it can shed some more light on the other merits.
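A small sketch of those operations against a hypothetical my_table (note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block):

-- Fast, metadata-only change when the new column can be NULL and has no default
ALTER TABLE my_table ADD COLUMN c1 integer;

-- Build the index without taking a write lock on the table
CREATE INDEX CONCURRENTLY idx_my_table_c1 ON my_table (c1);

-- Transactional migration: either everything applies or nothing does
BEGIN;
ALTER TABLE my_table ADD COLUMN c2 text;
ALTER TABLE my_table DROP COLUMN old_col;
COMMIT;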
Since you asked about other databases, here's some information about Oracle.
Adding a NULL column to an Oracle table is a very quick operation as it only updates the data dictionary. This holds an exclusive lock on the table for a very short period of time. It will, however, invalidate any dependent stored procedures, views, triggers, etc. These will get recompiled automatically.
From there, if necessary, you can create an index using the ONLINE clause. Again, this takes only very short data dictionary locks. It'll read the whole table looking for things to index, but it does not block anyone while doing this.
If you need to add a foreign key, you can do this and get Oracle to trust you that the data is correct. Otherwise it needs to read the whole table and validate all the values which can be slow (create your index first).
If you need to put a default or calculated value into every row of the new column, you'll need to run a massive update or perhaps a little utility program that populates the new data. This can be slow, especially if the rows get a lot bigger and no longer fit in their blocks. Locking can be managed during this process. Since the old version of your application, which is still running, does not know about this column, you might need a sneaky trigger or to specify a default.
From there, you can do a switcharoo on your application servers to the new version of the code and it'll keep running. Drop your sneaky trigger.
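A rough sketch of those steps, with made-up names (the NOVALIDATE option is what lets Oracle "trust you" about the existing data):

-- Quick data-dictionary-only change: add a nullable column
ALTER TABLE my_table ADD (new_col NUMBER NULL);

-- Build the index without blocking DML on the table
CREATE INDEX idx_my_table_new_col ON my_table (new_col) ONLINE;

-- Add the foreign key without validating existing rows
ALTER TABLE my_table ADD CONSTRAINT fk_my_table_parent
    FOREIGN KEY (new_col) REFERENCES parent_table (id)
    ENABLE NOVALIDATE;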
Alternatively, you can use DBMS_REDEFINITION which is a black box designed to do this sort of thing.
All this is so much bother to test, etc that we just have an early Sunday morning outage whenever we release a major version.
If you cannot afford downtime for your database when doing application updates you should consider maintaining a two-node cluster for high availability. With a simple replication setup, you could do almost fully online structural changes like the one you suggest:
wait for all changes to be replicated on a passive slave
change the passive slave to be the active master
do the structural changes to the old master
replicate changes back from the new master to the old master
do the master swapping again and the new app deployment simultaneously
It is not always easy, but it works, usually with zero downtime (a rough sketch of the swap follows below). The second node does not have to be only a passive one; it can be used for testing, running statistics, or as a fallback node.
If you do not have the infrastructure for that, replication can be set up within a single machine (with two instances of MySQL).
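Very roughly, the promotion and re-pointing could look something like this in MySQL (host names, credentials, and binlog coordinates are placeholders, and the exact steps depend on your replication setup):

-- On the passive slave, once SHOW SLAVE STATUS reports Seconds_Behind_Master = 0:
STOP SLAVE;
RESET SLAVE ALL;               -- it is now a standalone server: the new master
SET GLOBAL read_only = OFF;

-- On the old master, apply the schema change while it is out of rotation:
ALTER TABLE my_table ADD COLUMN c1 INT;

-- Then make the old master replicate from the new master
-- (use the coordinates from SHOW MASTER STATUS on the new master):
CHANGE MASTER TO
    MASTER_HOST = 'new-master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;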
Nope. If you are using MyISAM tables, to the best of my understanding they only do table locks - there are no record locks; they just try to keep everything hyperfast through simplicity. (Other MySQL table types operate differently.) In any case, you can copy the table to another table, alter it, and then switch them, updating for the differences.
This is such a massive alteration that I doubt any DBMS would support it. It's considered a benefit to be able to do it with data in the table in the first place.
Temporary solution...
Another solution could be to add a second table keyed on the primary key of the original table, along with your new column.
Populate the primary keys into that table together with the values for the new column, modify your queries to join to it for SELECT operations, and handle INSERTs and UPDATEs of this column's value separately.
When you are able to get downtime, you can alter the original table, modify your DML queries, and drop the new table you created earlier.
Otherwise, you may go for a clustering approach, replication, or the pt-online-schema-change tool from Percona.
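A small sketch of the side-table idea above, with invented names:

-- Side table keyed on the original table's primary key
CREATE TABLE my_table_ext (
    id INT NOT NULL PRIMARY KEY,
    new_col VARCHAR(100) NULL,
    FOREIGN KEY (id) REFERENCES my_table (id)
);

-- Reads join the two tables together
SELECT t.*, e.new_col
FROM my_table t
LEFT JOIN my_table_ext e ON e.id = t.id;

-- Writes to the new column go to the side table separately
INSERT INTO my_table_ext (id, new_col)
VALUES (42, 'value')
ON DUPLICATE KEY UPDATE new_col = VALUES(new_col);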
You should definitely try pt-online-schema-change. I have been using this tool to do migrations on AWS RDS with multiple slaves and it has worked very well for me. I wrote an elaborate blog post on how to do that which might be helpful for you.
Blog: http://mrafayaleem.com/2016/02/08/live-mysql-schema-changes-with-percona/
Using the InnoDB plugin, ALTER TABLE statements which only add or drop secondary indexes can be done "quickly", i.e. without rebuilding the table.
Generally speaking however, in MySQL, any ALTER TABLE involves rebuilding the entire table which can take a very long time (i.e. if the table has a useful amount of data in it).
You really need to design your application so that ALTER TABLE statements do not need to be done regularly; you certainly don't want any ALTER TABLE done during normal running of the application unless you're prepared to wait or you're altering tiny tables.
I would recommend one of two approaches:
1. Design your database tables with potential changes in mind. For example, I've worked with Content Management Systems, which change the data fields in content regularly. Instead of building the physical database structure to match the initial CMS field requirements, it is much better to build in a flexible structure. In this case, that means using a large text field (varchar(max), for example) to hold flexible XML data (see the sketch after this list). This makes structural changes much less frequent. Structural changes can be costly, so there is a benefit to cost here as well.
2. Have scheduled system maintenance time. The system goes offline during changes (monthly, etc.), and the changes are scheduled during the least heavily trafficked time of day (3-5am, for example). The changes are staged prior to the production rollout, so you will have a good fixed estimate of the downtime window.
2a. Have redundant servers, so that when the system has downtime, the whole site does not go down. This would allow you to "roll" your updates out in a staggered fashion, without taking the whole site down.
Options 2 and 2a may not be feasible; they tend to be only for larger sites/operations. They are valid options, however, and I have personally used all of the options presented here.
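A tiny sketch of the flexible-structure idea from option 1, using a hypothetical content table (the payload column simply holds whatever fields the CMS currently needs):

-- Fixed physical structure; the variable CMS fields live inside the XML payload
CREATE TABLE content_item (
    id INT NOT NULL PRIMARY KEY,
    content_type VARCHAR(50) NOT NULL,
    field_data TEXT NOT NULL  -- e.g. '<fields><headline>...</headline><byline>...</byline></fields>'
);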
If anyone is still reading this or happens to come here, this is the big benefit of using a NoSQL database system like MongoDB. I had the same issue dealing with altering a table to add columns for additional features, or indexes, on a large table with millions of rows and heavy writes. It would end up locking for a very long time, so doing this on the LIVE database would frustrate our users. On small tables you can get away with it.
I hate the fact that we have to "design our tables to avoid altering them". I just don't think that works in today's website world. You can't predict how people will use your software; that's why you rapidly change things based on user feedback. With MongoDB, you can add "columns" at will with no downtime. You don't really even add them; you just insert data with new columns and it happens automatically.
Worth checking out: www.mongodb.com
In general, the answer is going to be "no, you're changing the structure of the table, which potentially will require a lot of updates", and I definitely agree with that. If you expect to be doing this often, then I'll offer an alternative to "dummy" columns: use VIEWs instead of tables for SELECTing data. IIRC, changing the definition of a view is relatively lightweight, and the indirection through the view is resolved when the query plan is compiled. The expense is that you would have to add the column to a new table and make the view JOIN in the column.
Of course this only works if you can use foreign keys to perform cascading of deletes and whatnot. The other bonus is that you can create a new table containing a combination of the data and point the view to it without disturbing client usage.
Just a thought.
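A minimal sketch of the view indirection, with made-up names (clients keep selecting from the same view name while the definition changes underneath):

-- New column lives in its own table
CREATE TABLE my_table_extra (
    id INT NOT NULL PRIMARY KEY,
    new_col INT NULL
);

-- Only the view definition changes; the base table is untouched
CREATE OR REPLACE VIEW my_table_v AS
SELECT t.id, t.col_a, t.col_b, x.new_col
FROM my_table t
LEFT JOIN my_table_extra x ON x.id = t.id;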
The difference between Postgres and MySQL in this regard is that Postgres doesn't re-create the table; it modifies the data dictionary, which is similar to Oracle. The operation is therefore fast, though it still requires an exclusive DDL lock on the table for a very short time, as stated above by others.
In MySQL the operation copies the data to a new table while blocking transactions, which had been a major pain for MySQL DBAs prior to version 5.6.
The good news is that since the MySQL 5.6 release this restriction has been mostly lifted, and you can now enjoy the true power of MySQL.
As SeanDowney has mentioned, pt-online-schema-change is one of the best tools to do what you have described in the question here. I recently did a lot of schema changes on a live DB and it went pretty well. You can read more about it on my blog post here: http://mrafayaleem.com/2016/02/08/live-mysql-schema-changes-with-percona/.
Dummy columns are a good idea if you can predict their type (and make them nullable). Check how your storage engine handles nulls.
MyISAM will lock everything if you even mention a table name in passing, on the phone, at the airport. It just does that...
That being said, locks aren't really that big a deal; as long as you are not trying to add a default value for the new column to every row, but let it sit as null, and your storage engine is smart enough not to go writing it, you should be ok with a lock that is only held long enough to update the metadata. If you do try to write a new value, well, you are toast.
TokuDB can add/drop columns and add indexes "hot"; the table is fully available throughout the process. It is available via www.tokutek.com.
Not really.
You ARE altering the underlying structure of the table, after all, and that's a bit of information that's quite important to the underlying system. You're also (likely) moving much of the data around on disk.
If you plan on doing this a lot, you're better off simply padding the table with "dummy" columns that are available for future use.