How to optimally clear down multiple tables in Oracle SQL?

I have a script that will be clearing down 10+ tables, and it will be run frequently.
However, 3 of these tables have tens of thousands of records each, which causes the script to take over two and a half hours to run. That isn't good enough, because it will be run during the workday to test a suite of scripts that are migrating data.
We have tried a few different approaches:
DELETE FROM x
Requires nulling some foreign keys, but this is the most reliable way we have found. However, as I've mentioned, the runtime is far too long for our purposes.
A mixture of DELETE FROM x and TRUNCATE TABLE x
This is the fastest way we've found: deleting from most tables as normal and then truncating the problematic ones. But it requires clearing the tables down in an explicit order. That is awkward in and of itself, and our DB schema is also changing frequently, which makes it cumbersome.
Disable constraints and then TRUNCATE TABLE x
We have also begun writing a script to disable all PK/FK constraints related to the tables we would like to empty, truncate those tables and their child/sibling/parent tables, and then re-enable those constraints. This seems like a good approach but is likely over-engineering.
There is one further approach that I have considered, but I'm not sure whether it would make a difference: removing all FK constraints and recreating them with ON DELETE CASCADE. This might optimise approach 1. But I said I would throw it to SO first and see if anybody else has had to deal with this before.
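For reference, that last approach would be a sketch along these lines (the table, column and constraint names here are hypothetical):
-- recreate the FK so that a DELETE on the parent cascades to the child
ALTER TABLE child_table DROP CONSTRAINT child_parent_fk;
ALTER TABLE child_table ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent_table (parent_id)
  ON DELETE CASCADE;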

The TRUNCATE statement is way faster than the DELETE statement for emptying a table.
In Oracle (as in PostgreSQL), there is a CASCADE option on the TRUNCATE statement (similar in spirit to ON DELETE CASCADE on foreign keys), which might help with the "overengineering" part (where you disable the FKs and then re-enable them).
TRUNCATE TABLE x CASCADE;
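Note that in Oracle (12c and later) TRUNCATE ... CASCADE only descends into child tables whose foreign keys are defined with ON DELETE CASCADE; otherwise the statement errors out. A minimal sketch, assuming hypothetical parent/child tables:
CREATE TABLE child_table (
  id        NUMBER PRIMARY KEY,
  parent_id NUMBER REFERENCES parent_table (parent_id) ON DELETE CASCADE
);
TRUNCATE TABLE parent_table CASCADE;  -- empties parent_table and child_table in one statement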

Truncating is better than deleting, especially for large tables. As you already found out, it is faster, but it also resets the high water mark, so queries you run afterwards will perform better.
Whichever option you choose (2 or 3) is OK with me, as long as you pick the one you prefer. Having to know the order does matter, so - yes, that's kind of tedious.
I wouldn't remove foreign key constraints; they exist for a reason. If you do remove them, don't forget to recreate them afterwards.
You didn't say whether there are other tables involved in that schema. If not, perhaps you could even drop/recreate the user and then recreate the tables within the same script. Drawback: "normal" users can't do that; you'll need someone with DBA privileges.
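For what it's worth, a rough PL/SQL sketch of option 3, assuming you maintain the list of tables to empty yourself (the table names below are hypothetical, and the loops disable/re-enable every FK in the schema, which may be broader than you need):
BEGIN
  -- disable all enabled foreign keys in the current schema
  FOR c IN (SELECT table_name, constraint_name
              FROM user_constraints
             WHERE constraint_type = 'R' AND status = 'ENABLED') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE "' || c.table_name || '" DISABLE CONSTRAINT "' || c.constraint_name || '"';
  END LOOP;
  -- truncate the tables to be cleared down (hypothetical list)
  FOR t IN (SELECT column_value AS table_name
              FROM TABLE(sys.odcivarchar2list('TABLE_A', 'TABLE_B', 'TABLE_C'))) LOOP
    EXECUTE IMMEDIATE 'TRUNCATE TABLE "' || t.table_name || '"';
  END LOOP;
  -- re-enable the foreign keys
  FOR c IN (SELECT table_name, constraint_name
              FROM user_constraints
             WHERE constraint_type = 'R' AND status = 'DISABLED') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE "' || c.table_name || '" ENABLE CONSTRAINT "' || c.constraint_name || '"';
  END LOOP;
END;
/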

Related

One single DELETE statement takes a lot of time?

For example:
delete from dk_lor where cyt=9 and df=6 and l_t='S'
The table dk_lor is also an independent one. I haven't added any triggers to it, and the columns are indexed, but it still takes a long time to delete the records.
Could anyone advise me what to do now?
"Independent" means that no other table depends on it and it does not depend on any other table (no referential integrity involved).

Speeding up deletes that have joins

I am running a stored procedure to delete data from two tables:
delete from TESTING_testresults
from TESTING_testresults
inner join TESTING_QuickLabDump
on TESTING_QuickLabDump.quicklabdumpid = TESTING_TestResults.quicklabdumpid
where TESTING_quicklabdump.[Specimen ID]=@specimen
delete from TESTING_QuickLabDump
from TESTING_Quicklabdump
where [specimen id]=@specimen
One table has 60m rows and the other about 2m rows.
The procedure takes about 3 seconds to run.
Is there any way I can speed this up? Perhaps using EXISTS?
Meaning IF EXISTS... THEN DELETE - because the delete should not be occurring every single time.
Something like this:
if @specimen exists in TESTING_QuickLabDump then do the procedure with the two deletes
Thank you!!!
Rewriting the query probably won't help speed this up. Use the profiler to find out which parts of the query are slow; to do that, have the profiler output the execution plan. Then try adding appropriate indexes. Perhaps one or both tables could use an index over [Specimen ID].
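For illustration, the suggested indexes might look like this in T-SQL (the index names are hypothetical):
-- supports the WHERE [Specimen ID] = @specimen filter
CREATE NONCLUSTERED INDEX IX_QuickLabDump_SpecimenId
  ON TESTING_QuickLabDump ([Specimen ID]);
-- supports the join from the results table back to the dump table
CREATE NONCLUSTERED INDEX IX_TestResults_QuickLabDumpId
  ON TESTING_testresults (quicklabdumpid);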
For a table with 60 mil rows I would definitely look into partitioning the data horizontally and/or vertically. If it's time-sensitive data then you ought to be able to move old data into a history table. That's usually the first and most obvious thing people do so I would imagine if that were a possibility you would have already done it.
If there are many columns then it would definitely benefit you to denormalize the data into multiple tables. If you did this, I would suggest renaming the tables and creating a view of all the partitioned tables named after the original table. Doing that should ensure existing code isn't broken.
If you 'really' want to fine-tune the speed then you should look into getting a faster hard drive and learn a little about how hard drives work. Whether the data is stored towards the inner or outer section of the disk will affect speed of access slightly, for example. And solid-state drives have come a long way, so you might look into getting one of those.
Besides indexing the "obvious" fields, also look at your database schema and check whether you have any FOREIGN KEYs whose ON DELETE CASCADE or SET NULL might be triggered by your delete (unlike Oracle, MS SQL Server will tend to show these in the execution plan). Fortunately, this is usually fairly easy to fix by indexing the child endpoint of the FOREIGN KEY.
Also check if you have any expensive triggers.
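As an example, indexing the child endpoint of a cascading foreign key is a one-liner (the table and column names here are hypothetical):
-- lets the cascaded delete find child rows via the FK column instead of scanning
CREATE INDEX IX_child_parent_id ON child_table (parent_id);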

Implementing soft delete with minimal impact on performance and code

There are some similar questions on the topic, but they are not really helping me.
I want to implement a soft delete feature like on StackOverflow, where items are not really deleted, but just hidden. I am using a SQL database. Here are 3 options:
Add a is_deleted boolean field.
Advantages: Simple.
Disadvantages: No date record. Forces me to add an is_deleted = 0 filter to every query.
Add a deleted_date date field. This is set to NULL if it's not deleted.
Advantages: Has date.
Disadvantages: Still cluttering my queries.
For both of the above:
They will also impact performance because of all the useless rows that still have to be maintained in indexes. Also, an index on the deleted column won't help when fetching the non-deleted (the majority of the) rows; a full table scan is needed.
Another option is to create a separate table to hold deleted items:
Advantages: Improved performance when querying non-deleted rows. No need to add conditions to my queries on non-deleted rows. Easier on index maintenance.
Disadvantages: Complexity: Requires data migration for both deletion and undeletion. Need for new tables. Referential integrity is harder to handle.
Is there a better option?
I personally would base my answer off of how often you anticipate your users wanting to access that deleted data or "restore" that deleted data.
If it's often, then I would go with a "Date_Deleted" field and expose a calculated "IsDeleted" in my POCO in the code.
If it's never (or almost never) then a history table or deleted table is good for the benefits you explained.
I personally almost never use deleted tables (and opt for IsDeleted or Date_Deleted) because of the potential risk to referential integrity. You have A -> B and you remove the record from table B... you now have to manage referential integrity yourself because of your design choice.
If the key is numeric, I handle a "soft delete" by negating the key. (Of course, this won't work for identity keys.) You don't need to change your code at all, and you can easily restore the record by multiplying by -1.
Just another approach to give some thought to... If the key is alphanumeric, you can do something similar by prepending a unique "marker" character. Since deleted records will all begin with this marker, they will end up off by themselves in the index.
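A minimal sketch of the negate-the-key idea, assuming a hypothetical orders table with a numeric order_id key:
-- soft delete: flip the sign of the key
UPDATE orders SET order_id = -order_id WHERE order_id = 12345;
-- restore: multiply by -1 again
UPDATE orders SET order_id = -order_id WHERE order_id = -12345;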
In my opinion, the best way forward, when thinking about scaling and eventual table/database sizes is your third option - a separate table for deleted items. Such a table can eventually be moved to a different database to support scaling.
I believe you have listed the three most common options. As you have seen, each has advantages and disadvantages. Personally, I like taking the longer view on things.
I think your analysis of the options is good but you missed a few relevant points which I list below. Almost all implementations that I have seen use some sort of deleted or versioning field on the row as you suggest in your first two options.
Using one table with deleted flag:
If your indexes all contain the deleted flag field first, and your queries mostly contain a WHERE isdeleted = false type structure, then it DOES solve your performance problems and the indexes very efficiently exclude the deleted rows. Similar logic could be used for the deleted date option.
Using two tables
In general you need to make massive changes to reports, because some reports may refer to deleted data (old sales figures might refer to a deleted sales category, for example). One can overcome this by creating a view which is a union of the two tables to read from, and only writing to the active-records table.
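A sketch of such a view, assuming hypothetical sales_categories and sales_categories_deleted tables with identical structure:
CREATE VIEW all_sales_categories AS
SELECT sc.*, 0 AS is_deleted FROM sales_categories sc
UNION ALL
SELECT scd.*, 1 AS is_deleted FROM sales_categories_deleted scd;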
Let's suppose we create a field called dead to mark deleted rows. We can then create an index that covers only rows where dead is false.
In this way, we only search the non-deleted rows, adding an index hint if the optimizer doesn't pick the index on its own.
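Depending on the RDBMS, the index itself can exclude the deleted rows. On SQL Server that is a filtered index (PostgreSQL partial indexes are analogous; the table and column names here are hypothetical):
CREATE NONCLUSTERED INDEX IX_orders_active
  ON orders (customer_id)
  WHERE dead = 0;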

sql: DELETE + INSERT vs UPDATE + INSERT

A similar question has been asked, but since it always depends, I'm asking for my specific situation separately.
I have a website page that shows some data that comes from a database, and to generate the data from that database I have to run some fairly complex multi-join queries.
The data is being updated once a day (nightly).
I would like to pre-generate the data for the said view to speed up the page access.
For that I am creating a table that contains exact data I need.
Question: for my situation, is it reasonable to do a complete table wipe followed by an insert, or should I do update + insert?
SQL-wise it seems like DELETE + INSERT will be easier (the INSERT part is a single SQL statement).
EDIT: RDBMS: MS SQL Server 2008 Ent
TRUNCATE will be faster than DELETE, so if you need to empty a table, do that instead.
You didn't specify your RDBMS vendor, but some of them also have MERGE/UPSERT commands. These let you update the table if the data exists and insert it if it doesn't.
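Since the edit says this is SQL Server 2008, a minimal MERGE sketch might look like this (the table and column names are hypothetical):
MERGE INTO report_cache AS target
USING staging_report_data AS source
   ON target.id = source.id
WHEN MATCHED THEN
    UPDATE SET target.value = source.value
WHEN NOT MATCHED THEN
    INSERT (id, value) VALUES (source.id, source.value);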
It partly depends on how the data is accessed. If you have a period of time with no (or very few) users accessing it, then there won't be much impact on the data disappearing (between the DELETE and the completion of the INSERT) for a short while.
Have you considered using a materialized view (MSSQL calls them indexed views) instead of doing it manually? This could also have other performance benefits, as an indexed view gives the query optimizer more choices when it's constructing execution plans for other queries that reference the table(s) in the view.
It depends on the size of the table and the recovery model of the database. If you are deleting many hundreds of thousands of records and reinstating them, versus updating a small batch of a few hundred and inserting tens of rows, it will add unnecessary growth to your transaction log. However, you could use TRUNCATE to get around this, as it is only minimally logged.
Do you have the option of a MERGE/UPSERT? If you're using MS-SQL you can use CROSS APPLY to do something similar if you don't.
One approach to handling this type of problem is to insert into a new table and then do a table rename. This ensures that all the new data is present at the same time.
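On SQL Server that swap can be done with sp_rename, roughly like this (the table names are hypothetical):
-- build the next day's data in report_cache_new, then swap the names
EXEC sp_rename 'report_cache', 'report_cache_old';
EXEC sp_rename 'report_cache_new', 'report_cache';
-- drop or truncate report_cache_old once the swap is verified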
What if some data that was present yesterday is not there anymore? DELETE may be safer, or you could end up having to delete some records anyway.
And in the end it doesn't really matter which way you go.
Unless you're in the case @kevinw mentioned.
Although I fully agree with SQLMenace's answer, I would like to point out that MERGE does NOT remove unneeded records! If you're sure that your new data will be a superset of the existing data, then MERGE is great; otherwise you'll either need to make sure that you delete any superfluous records later on, or use the TRUNCATE + INSERT method...
(Personally I'm still a fan of the latter, as it usually is quite fast; just make sure to drop all indexes/unique constraints upfront and rebuild them one by one afterwards. This has the benefit of the INSERT transaction being smaller and the index-adding being done in (smaller) transactions again later on.) (**)
(**: Yes, this might be tricky on a live system, but then again he already mentioned this was done during some kind of overnight job; I'm extrapolating that there is no user access at that time.)
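A rough outline of that TRUNCATE + INSERT flow with the index handling described above (the object names are hypothetical, and only non-clustered indexes can be disabled without making the table unreadable):
TRUNCATE TABLE report_cache;
ALTER INDEX IX_report_cache_value ON report_cache DISABLE;  -- skip index maintenance during the load
INSERT INTO report_cache (id, value)
SELECT id, value FROM staging_report_data;                  -- the single complex SELECT goes here
ALTER INDEX IX_report_cache_value ON report_cache REBUILD;  -- rebuild in its own, smaller transaction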

RENAME faster than DROP+ADD in MySQL alter table

I'm performing some MySQL table maintenance that will mean removing some redundant columns and adding some new ones.
Some of the columns to drop are of the same type as ones to add. Would the procedure be faster if I took advantage of this and reused some of the existing columns?
My rationale is that changing column names should be a simple table metadata change, whereas removing and adding columns means either finding room at the end of the file (fragmenting data) or rebuilding every row with the correct columns so that they're at the same place on the disk.
The engine in question is MyISAM and I'm not up to scratch on exactly how it'll treat this, so I'd like to hear from anyone who has been in the same situation before!
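For reference, the two alternatives being compared look like this in MySQL (the column names are hypothetical):
-- reuse an existing column of the right type by renaming it
ALTER TABLE my_table CHANGE old_unused_col new_col INT;
-- versus dropping the old column and adding a new one
ALTER TABLE my_table DROP COLUMN old_unused_col, ADD COLUMN new_col INT;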
Unless you have a serious issue with performance, I wouldn't take the renaming approach - because of all the dirty data you're going to leave lying around.
Also, by dropping the table, you will cause any indexes to get re-built - which is a good idea every once in a while...
Martin
I would drop the columns. You will have fragmentation either way; that should be handled in your regular maintenance plans, and you could bring those forward after a large number of modification operations.
In case you don't know: on a MyISAM table, every ALTER TABLE operation makes a copy of the entire table, so the table will be locked for as long as your server needs to copy it.
I've used that same logic, and got stung because even with changes that are supposed to not require rewriting the table (i.e. a table rename), a MySQL bug caused it to think it was a change that required rewriting the table.
If the fields you are dealing with are date, datetime or timestamp fields, you are likely to be hit by this, which means that you should just assume it has to do a full rewrite and plan that way.