ALTER TABLE without locking the table? - sql

When doing an ALTER TABLE statement in MySQL, the whole table is read-locked (allowing concurrent reads, but prohibiting concurrent writes) for the duration of the statement. If it's a big table, INSERT or UPDATE statements could be blocked for a looooong time. Is there a way to do a "hot alter", like adding a column in such a way that the table is still updatable throughout the process?
Mostly I'm interested in a solution for MySQL but I'd be interested in other RDBMS if MySQL can't do it.
To clarify, my purpose is simply to avoid downtime when a new feature that requires an extra table column is pushed to production. Any database schema will change over time, that's just a fact of life. I don't see why we should accept that these changes must inevitably result in downtime; that's just weak.

The only other option is to do manually what many RDBMS systems do anyway...
- Create a new table with the new structure.
- Copy the contents of the old table over a chunk at a time, while watching for any INSERT/UPDATE/DELETE on the source table (this could be managed by a trigger; although that causes a slowdown, it's not a lock...).
- Once finished, rename the source table out of the way and rename the new table into its place, preferably as a single atomic operation.
- Finally, recompile any stored procedures, etc. that use that table. The execution plans will likely no longer be valid.
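A minimal sketch of the copy-and-swap in MySQL, with hypothetical table names (orders is the live table, orders_new the rebuilt copy); MySQL's RENAME TABLE can swap both names in one atomic statement:

-- Copy rows across in manageable chunks (repeat for successive key ranges).
INSERT INTO orders_new (id, customer_id, total)
SELECT id, customer_id, total
FROM orders
WHERE id BETWEEN 1 AND 10000;

-- Once the copy has caught up, swap the tables atomically.
RENAME TABLE orders TO orders_old,
             orders_new TO orders;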
EDIT:
Some comments have been made about this limitation being a bit poor. So I thought I'd put a new perspective on it to show why it is the way it is...
Adding a new field is like changing one field on every row.
Field Locks would be much harder than Row locks, never mind table locks.
You're actually changing the physical structure on the disk, every record moves.
This really is like an UPDATE on the Whole table, but with more impact...

Percona makes a tool called pt-online-schema-change that allows this to be done.
It essentially makes a copy of the table and modifies the new table. To keep the new table in sync with the original it uses triggers to update. This allows the original table to be accessed while the new table is prepared in the background.
This is similar to Dems' suggested method above, but it does so in an automated fashion.
Some of their tools have a learning curve, namely connecting to the database, but once you have that down, they are great tools to have.
Ex:
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=db,t=numbers_are_friends --execute

This question is from 2009. Now MySQL offers a solution:
Online DDL (Data Definition Language)
A feature that improves the performance, concurrency, and availability of InnoDB tables during DDL (primarily ALTER TABLE) operations. See Section 14.11, "InnoDB and Online DDL" for details.
The details vary according to the type of operation. In some cases, the table can be modified concurrently while the ALTER TABLE is in progress. The operation might be able to be performed without doing a table copy, or using a specially optimized type of table copy. Space usage is controlled by the innodb_online_alter_log_max_size configuration option.
It lets you adjust the balance between performance and concurrency during the DDL operation, by choosing whether to block access to the table entirely (LOCK=EXCLUSIVE clause), allow queries but not DML (LOCK=SHARED clause), or allow full query and DML access to the table (LOCK=NONE clause). When you omit the LOCK clause or specify LOCK=DEFAULT, MySQL allows as much concurrency as possible depending on the type of operation.
Performing changes in-place where possible, rather than creating a new copy of the table, avoids temporary increases in disk space usage and I/O overhead associated with copying the table and reconstructing secondary indexes.
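As a hedged illustration (the table and column names are made up), on MySQL 5.6+ you can request an in-place, non-blocking column add explicitly; MySQL raises an error instead of silently falling back if the requested algorithm or lock level cannot be used for that particular operation:

ALTER TABLE customers
    ADD COLUMN loyalty_points INT NULL,
    ALGORITHM=INPLACE,
    LOCK=NONE;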
See the MySQL 5.6 Reference Manual -> InnoDB and Online DDL for more info.
It seems that online DDL is also available in MariaDB:
Alternatively you can use ALTER ONLINE TABLE to ensure that your ALTER TABLE does not block concurrent operations (takes no locks). It is equivalent to LOCK=NONE.
MariaDB KB about ALTER TABLE
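A one-line sketch of the MariaDB form, again with invented table and column names:

ALTER ONLINE TABLE customers ADD COLUMN loyalty_points INT NULL;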

See Facebook's online schema change tool.
http://www.facebook.com/notes/mysql-at-facebook/online-schema-change-for-mysql/430801045932
Not for the faint of heart; but it will do the job.

I recommend Postgres if that's an option. With Postgres there is essentially no downtime with the following procedures:
ALTER TABLE ADD COLUMN (if the column can be NULL)
ALTER TABLE DROP COLUMN
CREATE INDEX (must use CREATE INDEX CONCURRENTLY)
DROP INDEX
Another great feature is that most DDL statements are transactional, so you could do an entire migration within a SQL transaction, and if something goes wrong, the entire thing gets rolled back.
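A minimal sketch of such a migration, with invented table, column, and index names; note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so it is issued separately:

-- Transactional part: rolls back as a unit if anything fails.
BEGIN;
ALTER TABLE orders ADD COLUMN notes text;   -- nullable, so no table rewrite
ALTER TABLE orders DROP COLUMN legacy_flag;
COMMIT;

-- Must be run outside a transaction block.
CREATE INDEX CONCURRENTLY idx_orders_notes ON orders (notes);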
I wrote this up a little while ago; perhaps it can shed some more light on the other merits.

Since you asked about other databases, here's some information about Oracle.
Adding a NULL column to an Oracle table is a very quick operation as it only updates the data dictionary. This holds an exclusive lock on the table for a very short period of time. It will, however, invalidate any dependent stored procedures, views, triggers, etc. These will get recompiled automatically.
From there, if necessary, you can create an index using the ONLINE clause. Again, only very short data dictionary locks. It'll read the whole table looking for things to index, but does not block anyone while doing this.
If you need to add a foreign key, you can do this and get Oracle to trust you that the data is correct. Otherwise it needs to read the whole table and validate all the values, which can be slow (create your index first).
If you need to put a default or calculated value into every row of the new column, you'll need to run a massive update or perhaps a little utility program that populates the new data. This can be slow, especially if the rows get a lot bigger and no longer fit in their blocks. Locking can be managed during this process. Since the old version of your application, which is still running, does not know about this column, you might need a sneaky trigger or to specify a default.
From there, you can do a switcharoo on your application servers to the new version of the code and it'll keep running. Drop your sneaky trigger.
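A hedged sketch of those steps in Oracle SQL, with all object names invented:

-- Quick: a data-dictionary-only change, held under a very brief exclusive lock.
ALTER TABLE orders ADD (sales_region VARCHAR2(30) NULL);

-- Build the index without blocking DML against the table.
CREATE INDEX orders_sales_region_idx ON orders (sales_region) ONLINE;

-- Add the foreign key without validating existing rows ("trust me").
ALTER TABLE orders ADD CONSTRAINT orders_region_fk
    FOREIGN KEY (sales_region) REFERENCES regions (region_code)
    ENABLE NOVALIDATE;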
Alternatively, you can use DBMS_REDEFINITION which is a black box designed to do this sort of thing.
All this is so much bother to test, etc that we just have an early Sunday morning outage whenever we release a major version.

If you cannot afford downtime for your database when doing application updates, you should consider maintaining a two-node cluster for high availability. With a simple replication setup, you could do almost fully online structural changes like the one you suggest:
- wait for all changes to be replicated to a passive slave
- change the passive slave to be the active master
- do the structural changes on the old master
- replicate changes back from the new master to the old master
- do the master swapping again and the new app deployment simultaneously
It is not always easy, but it works, usually with zero downtime! The second node does not have to be only a passive one; it can be used for testing, running statistics, or as a fallback node.
If you do not have the infrastructure, replication can be set up within a single machine (with two instances of MySQL).

Nope. If you are using MyISAM tables, to the best of my understanding they only do table locks - there are no record locks; they just try to keep everything hyper-fast through simplicity. (Other MySQL storage engines operate differently.) In any case, you can copy the table to another table, alter it, and then switch them, updating for the differences.
This is such a massive alteration that I doubt any DBMS would support it. It's considered a benefit to be able to do it with data in the table in the first place.

Temporary solution...
Another solution could be to add another table with the primary key of the original table, along with your new column.
Populate the primary keys into that new table and fill in the values for the new column there; then modify your queries to join this table for SELECT operations, and handle the INSERTs and UPDATEs for this column's value separately.
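A rough sketch of the idea with invented names, where orders is the original table and orders_ext holds only the key plus the new column:

-- Side table: just the original primary key plus the new column.
CREATE TABLE orders_ext (
    order_id INT NOT NULL PRIMARY KEY,
    priority TINYINT NULL
);

-- Reads join the side table in; rows with no entry simply return NULL.
SELECT o.*, e.priority
FROM orders o
LEFT JOIN orders_ext e ON e.order_id = o.order_id;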
When you are able to get downtime, you can alter the original table, modify your DML queries, and drop the new table created earlier.
Otherwise, you may go for a clustering method, replication, or the pt-online-schema-change tool from Percona.

You should definitely try pt-online-schema-change. I have been using this tool to do migrations on AWS RDS with multiple slaves and it has worked very well for me. I wrote an elaborate blog post on how to do that which might be helpful for you.
Blog: http://mrafayaleem.com/2016/02/08/live-mysql-schema-changes-with-percona/

Using the InnoDB plugin, ALTER TABLE statements which only add or drop secondary indexes can be done "quickly", i.e. without rebuilding the table.
Generally speaking however, in MySQL, any ALTER TABLE involves rebuilding the entire table which can take a very long time (i.e. if the table has a useful amount of data in it).
You really need to design your application so that ALTER TABLE statements do not need to be done regularly; you certainly don't want any ALTER TABLE done during normal running of the application unless you're prepared to wait or you're altering tiny tables.

I would recommend one of two approaches:
1. Design your database tables with the potential changes in mind. For example, I've worked with Content Management Systems, which change data fields in content regularly. Instead of building the physical database structure to match the initial CMS field requirements, it is much better to build in a flexible structure. In this case, using a blob/text field (varchar(max), for example) to hold flexible XML data. This makes structural changes much less frequent. Structural changes can be costly, so there is a cost benefit here as well.
2. Have system maintenance time. The system goes offline during changes (monthly, etc.), with the changes scheduled during the least heavily trafficked time of the day (3-5 am, for example). The changes are staged prior to production rollout, so you will have a good fixed-window estimate of downtime.
2a. Have redundant servers, so that when the system has downtime, the whole site does not go down. This would allow you to "roll" your updates out in a staggered fashion, without taking the whole site down.
Options 2 and 2a may not be feasible; they tend to be only for larger sites/operations. They are valid options, however, and I have personally used all of the options presented here.

If anyone is still reading this or happens to come here, this is the big benefit of using a NoSQL database system like MongoDB. I had the same issue dealing with altering a table to either add columns for additional features or indexes on a large table with millions of rows and high write volume. It would end up locking for a very long time, so doing this on the LIVE database would frustrate our users. On small tables you can get away with it.
I hate the fact that we have to "design our tables to avoid altering them". I just don't think that works in today's website world. You can't predict how people will use your software; that's why you rapidly change things based on user feedback. With MongoDB, you can add "columns" at will with no downtime. You don't really even add them, you just insert data with new columns and it happens automatically.
Worth checking out: www.mongodb.com

In general, the answer is going to be "No". "You're changing the structure of the table which potentially will require a lot of updates" - and I definitely agree with that. If you expect to be doing this often, then I'll offer an alternative to "dummy" columns - use VIEWs instead of tables for SELECTing data. IIRC, changing the definition of a view is relatively lightweight, and the indirection through a view is resolved when the query plan is compiled. The expense is that you would have to add the column to a new table and make the view JOIN in the column.
Of course this only works if you can use foreign keys to perform cascading of deletes and whatnot. The other bonus is that you can create a new table containing a combination of the data and point the view to it without disturbing client usage.
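A rough illustration with hypothetical names: the new column lives in a side table, callers query the view, and the view can later be repointed at a consolidated table without clients noticing:

-- New column lives in its own table keyed by the original primary key.
CREATE TABLE customer_ext (
    customer_id INT NOT NULL PRIMARY KEY,
    marketing_opt_in TINYINT NULL
);

CREATE VIEW customer_v AS
SELECT c.customer_id, c.name, e.marketing_opt_in
FROM customer c
LEFT JOIN customer_ext e ON e.customer_id = c.customer_id;

-- Later, once the data is consolidated into one physical table,
-- simply repoint the view; client queries are unchanged.
CREATE OR REPLACE VIEW customer_v AS
SELECT customer_id, name, marketing_opt_in
FROM customer_combined;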
Just a thought.

The difference between Postgres and MySQL in this regard is that Postgres doesn't re-create the table but modifies the data dictionary, which is similar to Oracle. The operation is therefore fast, while it still requires an exclusive DDL table lock for a very short time, as stated above by others.
In MySQL the operation will copy data to a new table while blocking transactions, which had been the main pain point for MySQL DBAs prior to v5.6.
The good news is that since the MySQL 5.6 release the restriction has been mostly lifted and you can now enjoy the true power of the MySQL DB.

As SeanDowney has mentioned, pt-online-schema-change is one of the best tools to do what you have described in the question here. I recently did a lot of schema changes on a live DB and it went pretty well. You can read more about it on my blog post here: http://mrafayaleem.com/2016/02/08/live-mysql-schema-changes-with-percona/.

Dummy columns are a good idea if you can predict their type (and make them nullable). Check how your storage engine handles nulls.
MyISAM will lock everything if you even mention a table name in passing, on the phone, at the airport. It just does that...
That being said, locks aren't really that big a deal; as long as you are not trying to add a default value for the new column to every row, but let it sit as null, and your storage engine is smart enough not to go writing it, you should be ok with a lock that is only held long enough to update the metadata. If you do try to write a new value, well, you are toast.

TokuDB can add/drop columns and add indexes "hot", the table is fully available throughout the process. It is available via www.tokutek.com

Not really.
You ARE altering the underlying structure of the table, after all, and that's a bit of information that's quite important to the underlying system. You're also (likely) moving much of the data around on disk.
If you plan on doing this a lot, you're better off simply padding the table with "dummy" columns that are available for future use.

Related

Refactoring Auto-increment ids to GUIDs in Postgres DB

Jeff and others have convinced me that GUIDs are preferable to auto-increment ids. I have a Postgres DB that is indexed by auto-increment ids so I'd like to "refactor" the indexes to UUIDs. Is there some general (or specific) approach to doing this besides writing functions that traverse the tables, and check for index matches across tables?
Update
Note: the database is not currently in production, so performance and transactional integrity are non-issues.
I'm not able to find anything that will do this automatically for you, so it looks like it's up to you to do it. Good thing the world still needs database developers, eh?
The best way, arguably, is to have the entire change scripted out. The best way to create that script is probably with another script or tool (code that writes code), which doesn't seem to be available for this particular scenario. Of course each of these adds another layer of software which must be constructed and tested. If I thought that I would want to repeat this process some time, or needed some level of audit trail (e.g. change scripts), I would probably bite the bullet and write the script that writes this script.
If this really is just a one-shot deal, and you can prevent DB access while you're doing it, then it might save time and effort to just manually make the changes, sort of like when you initially develop a database. By this, I mean adding UUID columns via your preferred method (diagrammer, SQL DDL, etc.), filling them with data (probably with ad-hoc SQL DML), setting keys and constraints, and then eventually removing the old foreign keys and columns (again, using whatever method you like).
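A hedged sketch of that manual approach for one parent/child pair, with invented table names (author and book) and assuming a UUID generator such as pgcrypto's gen_random_uuid() plus PostgreSQL's default constraint names:

-- Add and backfill the UUID columns.
ALTER TABLE author ADD COLUMN guid uuid;
UPDATE author SET guid = gen_random_uuid();
ALTER TABLE author ALTER COLUMN guid SET NOT NULL;

ALTER TABLE book ADD COLUMN author_guid uuid;
UPDATE book b SET author_guid = a.guid FROM author a WHERE a.id = b.author_id;

-- Swap the keys (constraint names assume the default naming scheme).
ALTER TABLE book   DROP CONSTRAINT book_author_id_fkey;
ALTER TABLE author DROP CONSTRAINT author_pkey, ADD PRIMARY KEY (guid);
ALTER TABLE book   ADD FOREIGN KEY (author_guid) REFERENCES author (guid);

-- Finally drop the old integer columns.
ALTER TABLE book   DROP COLUMN author_id;
ALTER TABLE author DROP COLUMN id;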
If you have multiple environments (dev, test, prod), you can potentially do this in dev and then use a DB compare tool to script the changes, though you'll need the new FK values scripted.
An example
Here is a working script example on SQL Fiddle, though it's in SQL Server (my easiest DB), just to give you an idea about what you'll have to script (unfortunately, not how). It's still not completely transactionally consistent, as someone could modify something during one particular operation.
I realize this isn't by any means a complete answer, so feel free to vote me down (and provide a better answer).
Good luck, this is actually a fun problem.

Transferring data from one SQL table layout to a 'new & improved' one

The project I work on has undergone a transformation at the database level. For the better, about 40% of the SQL layout has been changed. Some columns were eliminated, others moved. I am now tasked with developing a data migration strategy.
What migration methods, or even tools, are available so that I don't have to figure out each and every individual dependency and manually script a key change when their IDs (for instance) change?
I realize this question is a bit obtuse and open ended, but I assume others have had to do this before and I would appreciate any advice.
I'm on MS SQL Server 2008
@OMG Ponies - "Not obtuse but vague":
Great point. I guess this helps me reconsider what I am asking, at least make it more specific. How do you insert from multiple tables to multiple tables keeping the relationships established by the foreign keys intact? I now realize I could drop the ID key constraint during the insert and re-enable it after, but I guess I have to figure out what depends on what myself and make sure it goes smoothly.
I'll start there, but will leave this open in case anyone else has other recommendations.
You should create an upgrade script that morphs the current schema into the v-next schema, applying the appropriate operations (ALTER TABLE, SELECT INTO, UPDATE, DELETE, etc.). While this may seem tedious, it is the only process that is truly testable: start from a backup of the current db, apply the upgrade script, and test the resulting db for conformance with the desired schema. You can test and debug your upgrade script until it is hammered into correctness. You can test it on a real data size so that you get a correct estimate of downtime due to size-of-data operations.
While there are tools out there that can copy data or transform schemas (like SQL Compare), I believe approaching this as a development project, with a script deliverable that can be tested repeatedly and validated, is a much saner approach.
In future you can account for this upgrade step in your development and start with it, rather than try to squeeze it in at the end.
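One common pattern in such a script is to capture an old-to-new ID mapping as each parent table is copied, then join through that mapping when copying the children. A rough SQL Server sketch with made-up table and column names (the new tables are assumed to use IDENTITY keys):

-- Mapping of old parent IDs to the new IDs generated on insert.
CREATE TABLE dbo.CustomerIdMap (OldId INT NOT NULL PRIMARY KEY, NewId INT NOT NULL);

-- Copy parents; MERGE + OUTPUT can capture both the source (old) id
-- and the newly generated identity in one pass.
MERGE INTO dbo.Customer_New AS tgt
USING dbo.Customer_Old AS src
    ON 1 = 0                               -- never matches, so every row is inserted
WHEN NOT MATCHED THEN
    INSERT (Name, Email) VALUES (src.Name, src.Email)
OUTPUT src.CustomerId, inserted.CustomerId INTO dbo.CustomerIdMap (OldId, NewId);

-- Copy children, translating the FK through the mapping table.
INSERT INTO dbo.Order_New (CustomerId, OrderDate, Total)
SELECT m.NewId, o.OrderDate, o.Total
FROM dbo.Order_Old AS o
JOIN dbo.CustomerIdMap AS m ON m.OldId = o.CustomerId;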
There are tons of commercial tools around that claim to solve this -> I wouldn't buy that...
I think your best bet is to model domain classes that represent your data and write adapters that read in/serialize to the old/new schemas.
If you haven't got a model of your domain, you should build one now.
IDs will change, so ideally they should not carry any meaning for users of your database.

Upgrade strategies for bad DB schema designs

I've shown up at a new job and discovered a database which is in dire need of some help. There are many, many things wrong with it, including:
No foreign keys...anywhere. They're faked by using ints and managing the relationship in code.
Practically every field can be NULL, even though that isn't really true of the data
Naming conventions for tables and columns are practically non-existent
Varchars which are storing concatenated strings of relational information
Folks can argue, "It works", which it does. But moving forward, it's a total pain to manage all of this in code, and it opens us up to bugs IMO. Basically, the DB is being used as a flat file since it's not doing a whole lot of work.
I want to fix this. The issues I see now are:
We have a lot of data (migration, possibly tricky)
All of the DB logic is in code (so migration means big code changes)
I'm also tempted to do something "radical" like moving to a schema-free DB.
What are some good strategies when faced with an existing DB built upon a poorly designed schema?
Enforce Foreign Keys: If a relationship exists in the domain, then it should have a Foreign Key.
Renaming existing tables/columns is fraught with danger, especially if there are many systems accessing the Database directly. Gotchas include tasks that run only periodically; these are often missed.
Of Interest: Scott Ambler's article: Introduction To Database Refactoring
and Catalog of Database Refactorings
Views are commonly used to transition between changing data models because of the encapsulation. A view looks like a table, but does not exist as a finite object in the database - you can change what column is being returned for a given column alias as desired. This allows you to set up your codebase to use a view, so you can move from the old table structure to the new one without the application needing to be updated. But it means the view has to return the data in the existing format. For example - your current data model has:
SELECT t.`column` -- a list of concatenated strings, assuming comma separated
FROM `TABLE` t
...so the first version of the view would be the query above, but once you created the new table that uses 3NF, the query for the view would use:
SELECT GROUP_CONCAT(t.`column` SEPARATOR ',')
FROM NEW_TABLE t
...and the application code would never know that anything changed.
The problem with MySQL is that its view support is limited - you can't use variables within them, nor can they have subqueries.
The reality of the changes you wish to make is that you are effectively rewriting the application from the ground up. Moving logic from the codebase into the data model will drastically change how the application gets the data. Model-View-Controller (MVC) is ideal to implement alongside changes like these, to minimize the cost of future changes of this kind.
I'd say leave it alone until you really understand it. Then make sure you don't start with one of the Things You Should Never Do.
Read Scott Ambler's book on Refactoring Databases. It covers a good many techniques for how to go about improving a database - including the transitional measures needed to allow both old and new programs to work with the changing design.
Create a completely new schema and make sure that it is fully normalized and contains any unique, check and not null constraints etc that are required and that appropriate data types are used.
Prepopulate each table that fills the parent role in a foreign key relationship with a single 'Unknown' record.
Create an ETL (Extract Transform Load) process (I can recommend SSIS (SQL Server Integration Services) but there are plenty of others) that you can use to refill the new schema from the existing one on a regular basis. Use the 'Unknown' record as the parent of any orphaned records - there will be plenty ;). You will need to put some thought into how you will consolidate duplicate records - this will probably need to be on a case by case basis.
Use as many iterations as are necessary to refine your new schema (ensure that the ETL Process is maintained and run regularly).
Create views over the new schema that match the existing schema as closely as possible.
Incrementally modify any clients to use the new schema making temporary use of the views where necessary. You should be able to gradually turn off parts of the ETL process and eventually disable it completely.
First, see how tightly the code is coupled to the DB. If it is all mixed in with no DAO layer, you shouldn't think about a rewrite; but if there is a DAO layer, then it would be time to rewrite that layer and the DB along with it. If possible, build the migration tool around using the two DAOs.
But my guess is there is no DAO, so you need to find what areas of the code you are going to be changing and what parts of the DB they relate to; hopefully you can cut it up into smaller parts that can be updated as you maintain. The biggest deal is to get FKs in there and to start checking for proper indexes; there is a good chance they aren't being done correctly.
I wouldn't worry too much about naming until the rest of the DB is under control. As for the NULLs: if the program chokes on a value being NULL, don't let it be NULL; if the program can handle it, I wouldn't worry about it at this point. In the future, if the code is applying a default value, move that to the DB, but that is way down the line from the sound of things.
Do something about the varchars sooner rather than later. If anything, make that the first purely background fix to the program.
The other thing to do is estimate the effort of each area's change and then add that price to the cost of new development on that section of code. That way you can fix the parts as you add new features.

Do you put your indexes in source control?

And how do you keep them in synch between test and production environments?
When it comes to indexes on database tables, my philosophy is that they are an integral part of writing any code that queries the database. You can't introduce new queries or change a query without analyzing the impact to the indexes.
So I do my best to keep my indexes in synch between all of my environments, but to be honest, I'm not doing very well at automating this. It's a sort of haphazard, manual process.
I periodically review index stats and delete unnecessary indexes. I usually do this by creating a delete script that I then copy back to the other environments.
But here and there indexes get created and deleted outside of the normal process and it's really tough to see where the differences are.
I've found one thing that really helps is to go with simple, numeric index names, like
idx_t_01
idx_t_02
where t is a short abbreviation for a table. I find index maintenance impossible when I try to get clever with all the columns involved, like,
idx_c1_c2_c5_c9_c3_c11_5
It's too hard to differentiate indexes like that.
Does anybody have a really good way to integrate index maintenance into source control and the development lifecycle?
Indexes are a part of the database schema and hence should be source controlled along with everything else. Nobody should go around creating indexes on production without going through the normal QA and release process- particularly performance testing.
There have been numerous other threads on schema versioning.
The full schema for your database should be in source control right beside your code. When I say "full schema" I mean table definitions, queries, stored procedures, indexes, the whole lot.
When doing a fresh installation, you:
- check out version X of the product.
- from the "database" directory of your checkout, run the database script(s) to create your database.
- use the codebase from your checkout to interact with the database.
When you're developing, every developer should be working against their own private database instance. When they make schema changes they check in a new set of schema definition files that work against their revised codebase.
With this approach you never have codebase-database sync issues.
Yes, any DML or DDL changes are scripted and checked in to source control, mostly through ActiveRecord migrations in Rails. I hate to continually toot Rails' horn, but in many years of building DB-based systems I find the migration route to be so much better than any home-grown system I've used or built.
However, I do name all my indexes (don't let the DBMS come up with whatever crazy name it picks). Don't prefix them, that's silly (because you have type metadata in sysobjects, or in whatever db you have), but I do include the table name and columns, e.g. tablename_col1_col2.
That way if I'm browsing sysobjects I can easily see the indexes for a particular table (also it's a force of habit - wayyyy back in the day, on some DBMS I used, index names were unique across the whole DB, so the only way to ensure that was to use unique names).
I think there are two issues here: the index naming convention, and adding database changes to your source control/lifecycle. I'll tackle the latter issue.
I've been a Java programmer for a long time now, but have recently been introduced to a system that uses Ruby on Rails for database access for part of the system. One thing that I like about RoR is the notion of "migrations". Basically, you have a directory full of files that look like 001_add_foo_table.rb, 002_add_bar_table.rb, 003_add_blah_column_to_foo.rb, etc. These Ruby source files extend a parent class, overriding methods called "up" and "down". The "up" method contains the set of database changes that need to be made to bring the previous version of the database schema to the current version. Similarly, the "down" method reverts the change back to the previous version. When you want to set the schema for a specific version, the Rails migration scripts check the database to see what the current version is, then finds the .rb files that get you from there up (or down) to the desired revision.
To make this part of your development process, you can check these into source control, and season to taste.
There's nothing specific or special about Rails here, just that it's the first time I've seen this technique widely used. You can probably use pairs of SQL DDL files, too, like 001_UP_add_foo_table.sql and 001_DOWN_remove_foo_table.sql. The rest is a small matter of shell scripting, an exercise left to the reader.
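For example, the paired scripts for the hypothetical foo table might look like this, with the numeric filename prefix providing the ordering:

-- 001_UP_add_foo_table.sql
CREATE TABLE foo (
    id   INT NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- 001_DOWN_remove_foo_table.sql
DROP TABLE foo;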
I always source-control SQL (DDL, DML, etc). Its code like any other. Its good practice.
I am not sure indexes should be the same across different environments since they have different data sizes. Unless your test and production environments have the same exact data, the indexes would be different.
As to whether they belong in source control, am not really sure.
I do not put my indexes in source control but the creation script of the indexes. ;-)
Index-naming:
IX_CUSTOMER_NAME for the field "name" in the table "customer"
PK_CUSTOMER_ID for the primary key,
UI_CUSTOMER_GUID, for the GUID-field of the customer which is unique (therefore the "UI" - unique index).
On my current project, I have two things in source control - a full dump of an empty database (using pg_dump -c so it has all the ddl to create tables and indexes) and a script that determines what version of the database you have, and applies alters/drops/adds to bring it up to the current version. The former is run when we're installing on a new site, and also when QA is starting a new round of testing, and the latter is run at every upgrade. When you make database changes, you're required to update both of those files.
Using a grails app the indexes are stored in source control by default since you are defining the index definition inside of a file that represents your domain object. Just offering the 'Grails' perspective as an FYI.

Effectively transforming data from one SQL database to another in live environment

We have a bit of a messy database situation.
Our main back-office system is written in Visual Fox Pro with local data (yes, I know!)
In order to effectively work with the data in our websites, we have chosen to regularly export data to a SQL database. However the process that does this basically clears out the tables each time and does a re-insert.
This means we have two SQL databases - one that our FoxPro export process writes to, and another that our websites read from.
This question is concerned with the transform from one SQL database to the other (SqlFoxProData -> SqlWebData).
For a particular table (one of our main application tables), because various data transformations take place during this process, it isn't a matter of straightforward UPDATE, INSERT and DELETE statements using self-joins; we're having to use cursors instead (I know!)
This has been working fine for many months but now we are starting to hit upon performance problems when an update is taking place (this can happen regularly during the day)
Basically when we are updating SqlWebData.ImportantTable from SqlFoxProData.ImportantTable, it's causing occasional connection timeouts/deadlocks/other problems on the live websites.
I've worked hard at optimising queries, caching etc etc, but it's come to a point where I'm looking for another strategy to update the data.
One idea that has come to mind is to have two copies of ImportantTable (A and B), some concept of which table is currently 'active', updating the non-active table, then switching the currently active table,
i.e. websites read from ImportantTableA whilst we're updating ImportantTableB, then we switch websites to read from ImportantTableB.
Question is, is this feasible and a good idea? I have done something like it before but I'm not convinced it's necessarily good for optimisation/indexing etc.
Any suggestions welcome, I know this is a messy situation... and the long term goal would be to get our FoxPro application pointing to SQL.
(We're using SQL 2005 if it helps)
I should add that data consistency isn't particularly important in this instance, seeing as the data is always slightly out of date
There are a lot of ways to skin this cat.
I would attack the locking issues first. It is extremely rare that I would use CURSORS, and I think improving the performance and locking behavior there might resolve a lot of your issues.
I expect that I would solve it by using two separate staging tables. One for the FoxPro export in SQL and one transformed into the final format in SQL side-by-side. Then either swapping the final for production using sp_rename, or simply using 3 INSERT/UPDATE/DELETE transactions to apply all changes from the final table to production. Either way, there is going to be some locking there, but how big are we talking about?
You should be able to maintain one db for the website and just replicate to that table from the other sql db table.
This is assuming that you do not update any data from the website itself.
"For a particular table (one of our main application tables), because various data transformations take places during this process, it's not a straightforward UPDATE, INSERT and DELETE statements using self-joins, but we're having to use cursors instead (I know!)"
I cannot think of a case where I would ever need to perform an insert, update or delete using a cursor. If you can write the SELECT for the cursor, you can convert it into an insert, update or delete. You can join to other tables in these statements and use the CASE statement for conditional processing. Taking the time to do this in a set-based fashion may solve your problem.
One thing you may consider if you have lots of data to move: we occasionally create a view of the data we want and then have two tables - one active and one that data will be loaded into. When the data has finished loading, as part of your process, run a simple command to switch the table the view uses to the one you just finished loading. That way the users are only down for a couple of seconds at most, and you won't create locking issues where they are trying to access data as you are loading.
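A minimal sketch of that view-switch approach in SQL Server, reusing the ImportantTableA/ImportantTableB idea with an invented view name (each view statement runs in its own batch):

-- Websites query the view rather than either physical table.
CREATE VIEW dbo.ImportantTableLive AS
SELECT * FROM dbo.ImportantTableA;
GO

-- After ImportantTableB has been reloaded in the background, flip the view.
ALTER VIEW dbo.ImportantTableLive AS
SELECT * FROM dbo.ImportantTableB;
GO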
You might also look at using SSIS to move the data.
Do you have the option of making the updates more atomic, rather than the stated 'clear out and re-insert'? I think Visual Fox Pro supports triggers, right? For your key tables, can you add a trigger to the update/insert/delete to capture the ID of records that change, then move (or delete) just those records?
Or how about writing all changes to an offline database, and letting SQL Server replication take care of the sync?
[sorry, this would have been a comment, if I had enough reputation!]
Based on your response to Ernie above, you asked how you replicate databases. Here is Microsoft's how-to about replication in SQL2005.
However, if you're asking about replication and how to do it, it indicates to me that you are a little light in experience for SQL server. That being said, it's fairly easy to muck things up and while I'm all for learning by experience, if this is mission critical data, you might be better off hiring a DBA or at the very least, testing the #$##$% out of this before you actually implement it.