Truncate All Tables under a Schema in DB2 - sql

I want to truncate all tables under a specific schema in DB2 running on a Linux server, but I don't have the ALTER TABLE privilege needed to disable the foreign-key constraints.
Is there any way to do this?
I'm considering performing a topological sort based on the constraints between the tables, but that is a little complex.
Any good ideas on this problem?

You don't say what platform you're on. This answer is specific to DB2 on Linux, UNIX and Windows.
If you have LOAD, INSERT and DELETE privileges on the table(s), you can use the LOAD command with an empty file to truncate the tables, regardless of whether there are foreign key constraints:
LOAD from /dev/null of del replace into yourschema.yourtable nonrecoverable
This will place any dependent tables in check pending state. Once you have truncated all of your tables, you would use the SET INTEGRITY statement to take all of the tables out of check pending.
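A minimal sketch of the whole process, reusing the placeholder names from above (dependent_table stands for any table that ended up in check pending state): truncate each table with LOAD, then clear the pending state with SET INTEGRITY:
LOAD from /dev/null of del replace into yourschema.yourtable nonrecoverable
SET INTEGRITY FOR yourschema.dependent_table IMMEDIATE CHECKED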

Related

Run restore on working database. What happens?

What happens when I run:
zcat /mnt/Postgres/restoreFile.gz | psql my_db
on a working database? After the ALTER TABLE statements and other standard things ran, there were problems with duplicated keys. When I stopped it and tried to insert into the database, I got duplicate key errors because of sequences and constraints. It seems like all the data is in, but what about the sequences? What really happened to that database?
A normal Postgres backup consists of table design (like create table) and data (like insert) statements. If you run it twice, most design statements will fail. The insert statements would succeed insofar as the data definition allows for duplicate rows.
So restoring a database to a production server would typically result in a lot of duplicate rows in tables without a primary key. Some design changes made after the backup (like changing the owner of a table) may be undone.
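To illustrate, here is a rough sketch of what a plain-format dump contains (the table and sequence names are made up):
CREATE TABLE users (id serial PRIMARY KEY, name text);   -- fails on the second run: the table already exists
INSERT INTO users VALUES (1, 'alice');                    -- fails only where a primary key or unique constraint is violated
SELECT pg_catalog.setval('users_id_seq', 1, true);        -- succeeds and simply resets the sequence to the dumped value
So the sequences end up set to whatever values were recorded in the dump, which is why later inserts can raise duplicate key errors if the table already holds newer rows.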

How Do I Ignore Errors When Deleting Records

I'm in the process of testing a large number of schema changes to upgrade our db to run with the latest version of a packaged product we have.
At this point I'm not interested in the data contained in the db, only the schema (i.e. the tables, views, constraints, keys, stored procedures, etc.).
My testing entails running scripts, resolving errors, and re-running the scripts. If I want to re-run the scripts I need to first restore the db to get it back to a known state. Restoring the db is very time consuming as it has lots of data. I would like to "slim down" the db and remove as much data as possible. That way it will be quicker to restore the db and re-run my scripts.
When I attempt to delete records from many of the tables ("delete from table-name") I run into constraint errors and the command stops.
Is there a way to allow the command to continue and, in effect, delete all the records in the table where there aren't constraint issues? In other words I'd like the command to ignore errors and continue to delete all the records it can.
Any Suggestions would be greatly appreciated.
Byron above makes a good point; you can disable FOREIGN KEY or CHECK constraints with the ALTER TABLE command:
ALTER TABLE TableName NOCHECK CONSTRAINT ALL
Then do your work, and when finished,
ALTER TABLE TableName CHECK CONSTRAINT ALL
to re-enable constraints. Be careful, though: if you leave your data in an inconsistent state that violates constraints once you re-enable them, future updates can fail due to constraint errors.
Sources:
http://weblogs.sqlteam.com/joew/archive/2008/10/01/60719.aspx
http://technet.microsoft.com/en-us/library/ms190273.aspx
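If you want to apply this to every table in the database at once, this looks like SQL Server, so a sketch using the undocumented sp_MSforeachtable procedure (treat its availability on your instance as an assumption) would be:
-- Disable all FOREIGN KEY and CHECK constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
-- Delete from every table; with the constraints disabled the order no longer matters
EXEC sp_MSforeachtable 'DELETE FROM ?';
-- Re-enable the constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? CHECK CONSTRAINT ALL';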

Automatically dropping PostgreSQL tables once per day

I have a scenario where I have a central server and a node. Both server and node are capable of running PostgreSQL but the storage space on the node is limited. The node collects data at a high speed and writes the data to its local DB.
The server needs to replicate the data from the node. I plan on accomplishing this with Slony-I or Bucardo.
The node needs to be able to delete all records from its tables at a set interval in order to minimize disk space used. Should I use pgAgent with a job consisting of a script like
DELETE FROM tablex, tabley, tablez;
where the actual batch file to run the script would be something like
@echo off
C:\Progra~1\PostgreSQL\9.1\bin\psql -d database -h localhost -p 5432 -U postgres -f C:\deleteFrom.sql
?
I'm just looking for opinions on whether this is the best way to accomplish this task, or if anyone knows of a more efficient way to pull data from a remote DB and clear that remote DB to save space on the remote node. Thanks for your time.
The most efficient command for you is the TRUNCATE command.
With TRUNCATE, you can chain up tables, like your example:
TRUNCATE tablex, tabley, tablez;
Here's the description from the postgres docs:
TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables.
You may also add CASCADE as a parameter:
CASCADE Automatically truncate all tables that have foreign-key references to any of the named tables, or to any tables added to the group due to CASCADE.
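A sketch combining those options, using the placeholder table names from the question; RESTART IDENTITY additionally resets any sequences owned by columns of the truncated tables:
TRUNCATE tablex, tabley, tablez RESTART IDENTITY CASCADE;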
The two best options, depending on your exact needs and workflow, would be truncate, as @Bohemian suggested, or to create a new table, rename, then drop.
We use something much like the latter create/rename/drop method in one of our major projects. This has an advantage where you need to be able to delete some data, but not all data, from a table very quickly. The basic workflow is:
Create a new table with a schema identical to the old one
CREATE TABLE new_table (LIKE my_table INCLUDING ALL);
In a transaction, rename the old and new tables simultaneously:
BEGIN;
ALTER TABLE my_table RENAME TO old_table;
ALTER TABLE new_table RENAME TO my_table;
COMMIT;
[Optional] Now you can do stuff with the old table, while the new table is happily accepting new inserts. You can dump the data to your centralized server, run queries on it, or whatever.
Delete the old table
DROP TABLE old_table;
This is an especially useful strategy when you want to keep, say, 7 days of data around, and only discard the 8th day's data all at once. Doing a DELETE in this case can be very slow. By storing the data in partitions (one for each day), it is easy to drop an entire day's data at once.
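For the keep-7-days case, here is a minimal sketch using declarative partitioning (available in PostgreSQL 10 and later; on the 9.1 release from the question you would build the same layout with inheritance and triggers). All names are hypothetical:
CREATE TABLE readings (
    recorded_at timestamptz NOT NULL,
    value       double precision
) PARTITION BY RANGE (recorded_at);
-- One partition per day
CREATE TABLE readings_2024_06_01 PARTITION OF readings
    FOR VALUES FROM ('2024-06-01') TO ('2024-06-02');
-- Discarding the oldest day is a cheap metadata operation rather than a row-by-row DELETE
DROP TABLE readings_2024_06_01;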

Can I create Foreign Keys across Databases?

We have 2 databases - DB1 & DB2.
Can I create a table in DB1 that has a relation with one of the tables in DB2?
In other words, can I have a Foreign Key in my table from another database?
I connect to these databases with different users.
Any ideas?
Right now, I receive the error:
ORA-00942: table or view does not exist
No, Oracle does not allow you to create a foreign key constraint that references a table via a database link. You would have to use triggers to enforce the integrity.
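A rough sketch of the trigger approach, where the table, column, and database link names are all hypothetical (note it only covers inserts and updates on the child side):
CREATE OR REPLACE TRIGGER trg_check_remote_parent
BEFORE INSERT OR UPDATE OF parent_id ON local_child
FOR EACH ROW
DECLARE
  v_count INTEGER;
BEGIN
  -- Look the parent row up over the database link
  SELECT COUNT(*) INTO v_count
    FROM parent_table@remote_link
   WHERE id = :NEW.parent_id;
  IF v_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Parent row does not exist in the remote database');
  END IF;
END;
/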
One way to deal with this would be to create a materialized view of the master table on the local database, then create the integrity constraint pointing to the MV.
That works. But it can lead to some problems. First, if you ever need to do a complete refresh of the materialized view, you'll need to disable the constraint before doing so. Otherwise, Oracle won't be able to delete the rows in the MV before bringing in the new rows.
Second, you may run into some timing delays. For example, say you add a record to the master table on the remote site. Then you want to add a child record to the local table. But the MV is set to refresh daily and that hasn't happened yet. You'll get a foreign key violation, simply because the MV hasn't refreshed.
If you go this route, your safest approach is to set the MV to fast refresh on commit of the master table. That'll mean keeping a DB Link open nearly all the time. And you'll have admin work to do if you ever need to do a complete refresh.
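A rough sketch of that setup follows; the materialized view, link, and column names are hypothetical, a scheduled fast refresh is shown (ON COMMIT refresh is not available when the master table is remote), and the primary key is added to the MV explicitly so the foreign key has something to reference:
-- On the remote database: a materialized view log is required for fast refresh
CREATE MATERIALIZED VIEW LOG ON parent_table WITH PRIMARY KEY;
-- On the local database
CREATE MATERIALIZED VIEW parent_mv
  REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 1/1440
  AS SELECT parent_id FROM parent_table@remote_link;
ALTER TABLE parent_mv
  ADD CONSTRAINT parent_mv_pk PRIMARY KEY (parent_id);
ALTER TABLE local_child
  ADD CONSTRAINT fk_local_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_mv (parent_id);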
All in all, we've generally found that a trigger is easier. In some cases, we've simply defined the FK in our logical model but implemented it manually by setting up a daily job that will check for violations and alert staff. Of course, we're pretty careful so those alerts are exceedingly rare.

deleting a large number of rows from a table

We have a requirement to delete rows on the order of millions from multiple tables as a batch job (note that we are not deleting all the rows; we are deleting based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential constraint checking, etc.). I know in the LUW world we have ALTER TABLE NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 z/OS. Does anyone have any ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.
In the past I have solved this kind of problem by exporting the data and re-loading it with a replace style command. For example:
EXPORT to myfile.ixf OF ixf
SELECT *
FROM my_table
WHERE last_modified < CURRENT TIMESTAMP - 30 DAYS;
Then you can LOAD it back in, replacing the old stuff.
LOAD FROM myfile.ixf OF ixf
REPLACE INTO my_table
NONRECOVERABLE INDEXING MODE INCREMENTAL;
I'm not sure whether this will be faster or not for you (probably it depends on whether you're deleting more than you're keeping).
Do the foreign keys already have indexes as well?
How do you have your delete action set? CASCADE, SET NULL, or NO ACTION?
Use SET INTEGRITY to temporarily disable constraints on the batch process.
http://www.ibm.com/developerworks/data/library/techarticle/dm-0401melnyk/index.html
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r
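On DB2 LUW, the statements themselves look like this minimal sketch (the schema and table names are hypothetical): SET INTEGRITY ... OFF places a table in set integrity pending state so its constraints are not enforced, and IMMEDIATE CHECKED validates the data and takes it back out:
SET INTEGRITY FOR myschema.child_table OFF;
SET INTEGRITY FOR myschema.child_table IMMEDIATE CHECKED;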
We modified the tablespace so the lock would occur at the tablespace level instead of at the page level. Once we changed that, DB2 only required one lock to do the DELETE and we didn't have any issues with locking. As for the logging, we just asked the customer to be aware of the amount of logging required (as there did not seem to be a way around the logging issue). As for the constraints, we just dropped and recreated them after the delete.
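For reference, the tablespace change described above looks roughly like this on DB2 for z/OS (the database and tablespace names are hypothetical):
ALTER TABLESPACE MYDB.MYTS LOCKSIZE TABLESPACE;
-- and to restore the default afterwards
ALTER TABLESPACE MYDB.MYTS LOCKSIZE ANY;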
Thanks all for your help.