Clean everything from database tables - sql

I have a database with a lot of tables, and I want to clear the data from all of them. How can I do that?
Something like running a for loop over every table in the database and issuing DELETE FROM table_name; for each one.
Thanks.

Another alternative:
use mysqldump to dump the table schema only
use the --add-drop-table option
once you have the dump file, execute it
all the tables will be dropped and re-created
If you want to reset your auto_increment counters,
then edit the dump file to reset them
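A minimal sketch of that dump-and-reload approach, assuming a database named your_db and a MySQL account with the needed privileges (adjust names and credentials to your setup):
# dump the schema only, with a DROP TABLE statement before each CREATE
mysqldump --no-data --add-drop-table -u user -p your_db > schema.sql
# replaying the dump drops and re-creates every table, leaving them all empty
mysql -u user -p your_db < schema.sql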
if you are using Linux, this can be done with a bash script, like
for table in $(mysql -N <<< "show tables from your_db")
do
    mysql -N your_db <<< "truncate table $table"
done

One way is to create a script which will generate all the sql statements for you. Here is a version for linux:
echo "select concat('truncate ', table_name, ';') as '' from \
information_schema.tables where table_schema='YOURDATABASE';"\
|mysql -u USER -p > /tmp/truncate-all-tables.sql
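Once the file has been generated, run it against the target database (using the same placeholder USER and YOURDATABASE names):
mysql -u USER -p YOURDATABASE < /tmp/truncate-all-tables.sql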

The simplest way to do this is simply to issue a DELETE or TRUNCATE command on a per-table basis, either from a .sql batch file or via a scripting language. (Whilst it's possible to interrogate the table schema information and use this, it may not be suitable depending on whether or not you have the required access privileges and whether the tables in question follow a SELECTable naming convention.)
Incidentally, whilst you can use DELETE, this won't reclaim the used disk space and any auto_increment fields will remember the previous ID. (If you want to get the disk space back and reset the IDs to 1, use "TRUNCATE <table name>;".)
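To illustrate the difference, using a hypothetical table name my_table:
-- removes all rows, but keeps the auto_increment counter and the allocated space
DELETE FROM my_table;
-- removes all rows, resets auto_increment to 1 and reclaims the space
TRUNCATE TABLE my_table;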

Related

SymmetricDS replication DDL statements

I am trying to replicate DDL statements (like CREATE, ALTER, DROP) between different databases with the help of SymmetricDS.
I've found this page https://www.symmetricds.org/docs/how-to/sync-schema-ddl-changes which says it can be done this way:
bin/symadmin -e root-000 --node=001 sync-triggers
bin/symadmin -e root-000 --node=001 send-schema
But what I can't work out is whether there is a way to replicate DDL statements automatically, i.e. I create a table on one database and it is created on the other automatically (without running sync-triggers and send-schema manually)?
Thanks.

Create tables with a bash script from (Postgres) SQL select statements stored in files

Given a set of complex, consecutive (Postgres) SQL SELECT statements, each stored in its own sql-file, how can I write a bash script that drops (if it exists) and creates a table from the result of each statement, using the file name as the table name?
Background: we share these sql-files in a git repo where different users want to use the statements in different ways. I want to create tables automatically, while the others use temp tables, so I don't want to put the 'create table ...' into the sql-files themselves.
A skeleton for your shell script could look like this:
#!/bin/bash
set -e                        # stop immediately on any error
for script in s1 s2 s3        # one entry per sql-file / target table
do
    echo "processing $script"
    select=$(cat "$script")   # the SELECT statement stored in the file
    psql -d dbname -U user <<EOF
DROP TABLE IF EXISTS "$script";
CREATE TABLE "$script" AS $select ;
EOF
done
Note however that not every SELECT is suitable as the source for a CREATE TABLE ... AS SELECT ...
As the simplest example, consider the case when two different columns share the same name. This is legal in a SELECT, but an error condition when creating a table from it.
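A tiny illustration of that failure mode, with hypothetical tables a and b that both have an id column:
-- fine as a plain SELECT: the result simply contains two columns named "id"
SELECT a.id, b.id FROM a JOIN b ON a.id = b.id;
-- fails when used to create a table: the column name "id" is duplicated
CREATE TABLE t AS SELECT a.id, b.id FROM a JOIN b ON a.id = b.id;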

Automatically dropping PostgreSQL tables once per day

I have a scenario where I have a central server and a node. Both server and node are capable of running PostgreSQL but the storage space on the node is limited. The node collects data at a high speed and writes the data to its local DB.
The server needs to replicate the data from the node. I plan on accomplishing this with Slony-I or Bucardo.
The node needs to be able to delete all records from its tables at a set interval in order to minimize disk space used. Should I use pgAgent with a job consisting of a script like
DELETE FROM tablex, tabley, tablez;
where the actual batch file to run the script would be something like
@echo off
C:\Progra~1\PostgreSQL\9.1\bin\psql -d database -h localhost -p 5432 -U postgres -f C:\deleteFrom.sql
?
I'm just looking for opinions if this is the best way to accomplish this task or if anyone knows of a more efficient way to pull data from a remote DB and clear that remote DB to save space on the remote node. Thanks for your time.
The most efficient command for you is the TRUNCATE command.
With TRUNCATE you can list several tables in one statement, as in your example:
TRUNCATE tablex, tabley, tablez;
Here's the description from the postgres docs:
TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables.
You may also add CASCADE as a parameter:
CASCADE Automatically truncate all tables that have foreign-key references to any of the named tables, or to any tables added to the group due to CASCADE.
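So if other tables hold foreign keys pointing at tablex, tabley or tablez, something like this (reusing the table names from the question) would truncate those referencing tables as well:
TRUNCATE tablex, tabley, tablez CASCADE;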
The two best options, depending on your exact needs and workflow, would be truncate, as @Bohemian suggested, or to create a new table, rename, then drop.
We use something much like the latter create/rename/drop method in one of our major projects. This has an advantage where you need to be able to delete some data, but not all data, from a table very quickly. The basic workflow is:
Create a new table with a schema identical to the old one
CREATE TABLE new_table (LIKE my_table INCLUDING ALL);  -- my_table is the live table to be cleared
In a transaction, rename the old and new tables simultaneously:
BEGIN;
ALTER TABLE my_table RENAME TO old_table;
ALTER TABLE new_table RENAME TO my_table;
COMMIT;
[Optional] Now you can do stuff with the old table, while the new table is happily accepting new inserts. You can dump the data to your centralized server, run queries on it, or whatever.
Delete the old table
DROP TABLE old_table;
This is an especially useful strategy when you want to keep, say, 7 days of data around, and only discard the 8th day's data all at once. Doing a DELETE in this case can be very slow. By storing the data in partitions (one for each day), it is easy to drop an entire day's data at once.
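A minimal sketch of the per-day idea, assuming a hypothetical parent table named measurements and plain table inheritance (what was available around 9.1; newer Postgres versions also offer declarative partitioning):
-- one child table per day inherits from the parent
CREATE TABLE measurements_2013_01_08 () INHERITS (measurements);
-- inserts for that day go into the child table; discarding the oldest day
-- is then a single, fast operation
DROP TABLE measurements_2013_01_01;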

Given a SQL Express database file (.MDF) how can I wipe/clear/reset the schema and/or data?

What options do I have to clear the schema and data from an MDF file? What options are there to delete all the data?
To reset a database's schema, it seems I need to copy a file from a backup of the database taken when it was empty. I was wondering if there was a simpler or more efficient way.
To clear all the data, it seems I'd need to write a script. The script would disable constraints, then delete all rows from each table before turning the constraints back on. This is straightforward, but it does require that I discover/track which tables exist in the database. Maybe it's not sufficient, or there is an easier approach?
I'm not sure what the point is of 'clearing the schema' - surely a new database already has a 'clear' schema. BUT, you can create a new database in code via the following T-SQL:
USE master;
CREATE DATABASE NewDb ON (NAME = NewDbFile, FILENAME = '<filepath>');
If you need a file (an MDF) you can then detach the database too with sp_detach_db and you can then move it as required from the location specified above:
EXEC sp_detach_db NewDb
To clear the data you can use sp_msforeachtable with a truncation command - it is a minimally logged operation and does not fire delete triggers; note, however, that TRUNCATE will fail on any table that is referenced by a foreign key constraint, so you may need to drop or disable those constraints (or fall back to DELETE) first:
EXEC sp_msforeachtable 'TRUNCATE TABLE ?'
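If foreign keys do block TRUNCATE, one alternative that matches the disable/delete/re-enable idea from the question is something along these lines (sp_MSforeachtable is an undocumented procedure, so treat this as a convenience sketch rather than a guaranteed API):
-- disable all foreign key and check constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
-- delete all rows (fully logged, so this can be slow on large tables)
EXEC sp_MSforeachtable 'DELETE FROM ?'
-- re-enable and re-validate the constraints
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'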

How to drop all triggers in a Firebird 1.5 database

For debugging purposes I need to send one table of an existing Firebird 1.5 database to someone.
Instead of sending the whole db, I want to send the db with just this one table - no triggers, no constraints. I can't copy the data to another db, because that data is exactly what we want to check - why this one table is giving trouble.
I am just wondering if there is a way to drop all triggers, all constraints and every table but one (using some clever trick with the system tables or so)?
Using a GUI tool (I personally prefer IBExpert), execute the following command:
select 'DROP TRIGGER ' || rdb$trigger_name || ';' from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null)
Copy the result to the clipboard, then paste and execute it within the Script Executive window.
If you can switch your database to Firebird 2.1, there are switches in gbak and isql that help:
Some Firebird command-line tools have been supplied with new switches to suppress the automatic firing of database triggers:
gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
These switches can only be used by the database owner and SYSDBA.
You can drop all triggers by directly deleting them from the system table, like so:
delete from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null);
Note that the normal way of using drop trigger is certainly preferable, but it can be done.
You can also drop constraints by executing DDL statements, but to enumerate constraints and drop them in a SQL script you would need the execute block functionality that Firebird 1.5 doesn't have.
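You can, however, generate the DROP statements with a query in the same spirit as the trigger query above, and then run the generated output by hand. A sketch, assuming you only care about foreign key constraints (adjust the filter for other constraint types):
select 'ALTER TABLE ' || rdb$relation_name || ' DROP CONSTRAINT ' || rdb$constraint_name || ';'
from rdb$relation_constraints
where rdb$constraint_type = 'FOREIGN KEY'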
There are similar statements to delete other database objects, but actually running these successfully may be much more difficult because of dependencies between objects. You can't drop any object as long as another object depends on it. This can become really tricky due to circular references, where two (or even more) objects depend on one another, forming a cycle, so there isn't a single one that may be dropped first.
The way around this is to break one of the dependencies. A procedure for example that has dependencies to other objects can be altered to have an empty body, after which it does no longer depend on those other objects, so they may be dropped then. Dropping foreign keys is another way of eliminating dependencies between tables.
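To illustrate the empty-body trick, a hypothetical procedure my_proc could be stripped down like this in an isql script (the parameter and return declarations would have to stay compatible with whatever still calls it):
SET TERM ^ ;
ALTER PROCEDURE my_proc
AS
BEGIN
  EXIT;
END^
SET TERM ; ^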
I don't know of any tool implementing such a partial delete of database objects, your use case is IMO far from common. You could however have a look at the FlameRobin source code which has a certain amount of dependency detection in the code that is used to create DDL scripts or modification statements for database objects. Armed with that information you could write your own tool to do it.
If it's a one time thing it may be enough to do this manually, though. Use any Firebird management tool of your choice for that.