How do I dump the data of some SQLite3 tables? - sql

How do I dump the data, and only the data, not the schema, of some SQLite3 tables of a database (not all the tables)?
The dump should be in SQL format, as it should be easily re-entered into the database later and should be done from the command line. Something like
sqlite3 db .dump
but without dumping the schema, and with the ability to select which tables to dump.

You're not saying what you wish to do with the dumped file.
To get a CSV file (which can be imported into almost everything)
.mode csv
-- use '.separator SOME_STRING' for something other than a comma.
.headers on
.out file.csv
select * from MyTable;
To get an SQL file (which can be reinserted into a different SQLite database)
.mode insert <target_table_name>
.out file.sql
select * from MyTable;
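If you want to do this straight from the command line, as the question asks, the same dot-commands can be fed to sqlite3 through a here-document. A sketch, assuming the database file is named db.sqlite3:
sqlite3 db.sqlite3 <<'EOF' > file.sql
.mode insert MyTable
SELECT * FROM MyTable;
EOF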

You can do this by taking the difference of the .schema and .dump outputs, for example with grep:
sqlite3 some.db .schema > schema.sql
sqlite3 some.db .dump > dump.sql
grep -vx -f schema.sql dump.sql > data.sql
The data.sql file will then contain only the data, without the schema, something like this:
BEGIN TRANSACTION;
INSERT INTO "table1" VALUES ...;
...
INSERT INTO "table2" VALUES ...;
...
COMMIT;
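Since the output is plain SQL, it can later be re-entered into a database that already has the schema, for example (other.db is a placeholder for the target database):
sqlite3 other.db < data.sql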

You can specify one or more table arguments to the .dump special command, e.g. sqlite3 db ".dump 'table1' 'table2'".

Not the best way, but at least it does not need external tools (except grep, which is standard on *nix boxes anyway):
sqlite3 database.db3 .dump | grep '^INSERT INTO "tablename"'
but you do need to run this command for each table you are looking for.
Note that this does not include schema.
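If you need several tables, a small loop along these lines should work (a sketch; table1 and table2 are placeholder names):
for t in table1 table2; do
    sqlite3 database.db3 .dump | grep "^INSERT INTO \"$t\"" >> data.sql
done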

Any answer which suggests using grep to exclude the CREATE lines or just grab the INSERT lines from the sqlite3 $DB .dump output will fail badly. The CREATE TABLE commands list one column per line (so excluding CREATE won't get all of it), and values on the INSERT lines can have embedded newlines (so you can't grab just the INSERT lines).
for t in $(sqlite3 $DB .tables); do
echo -e ".mode insert $t\nselect * from $t;"
done | sqlite3 $DB > backup.sql
Tested on sqlite3 version 3.6.20.
If you want to exclude certain tables you can filter them with $(sqlite3 $DB .tables | grep -v -e one -e two -e three), or if you want a specific subset, replace that with one two three.
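For example, to dump everything except tables one, two and three, a sketch combining the two snippets above:
for t in $(sqlite3 $DB .tables | grep -v -e one -e two -e three); do
    echo -e ".mode insert $t\nselect * from $t;"
done | sqlite3 $DB > backup.sql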

As an improvement to Paul Egan's answer, this can be accomplished as follows:
sqlite3 database.db3 '.dump "table1" "table2"' | grep '^INSERT'
--or--
sqlite3 database.db3 '.dump "table1" "table2"' | grep -v '^CREATE'
The caveat, of course, is that you have to have grep installed.

In Python, Java, or other high-level languages the .dump command is not available, so we need to code the conversion to CSV by hand. Here is a Python example; examples in other languages would be appreciated:
import sqlite3
import csv
from os import path

def convert_to_csv(directory, db_name):
    # connect to the database and list all of its tables
    conn = sqlite3.connect(path.join(directory, db_name + '.db'))
    cursor = conn.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
    tables = cursor.fetchall()
    for table in tables:
        table = table[0]
        cursor.execute('SELECT * FROM ' + table)
        column_names = [column_name[0] for column_name in cursor.description]
        # one CSV file per table: header row first, then all data rows
        with open(path.join(directory, table + '.csv'), 'w', newline='') as csv_file:
            csv_writer = csv.writer(csv_file)
            csv_writer.writerow(column_names)
            csv_writer.writerows(cursor)
If you have 'panel data', in other words many individual entries with ids, add this inside the loop above and it will also dump summary statistics:
        # assumes the table has 'id' and 'round' columns; sums the other
        # columns per round and writes them to a separate aggregate CSV
        if 'id' in column_names:
            with open(path.join(directory, table + '_aggregate.csv'), 'w', newline='') as csv_file:
                csv_writer = csv.writer(csv_file)
                column_names.remove('id')
                column_names.remove('round')
                sum_string = ','.join('sum(%s)' % item for item in column_names)
                cursor.execute('SELECT round, ' + sum_string + ' FROM ' + table + ' GROUP BY round;')
                csv_writer.writerow(['round'] + column_names)
                csv_writer.writerows(cursor)

Review of other possible solutions
Include only INSERTs
sqlite3 database.db3 .dump | grep '^INSERT INTO "tablename"'
Easy to implement, but it will fail if any of your columns contain newlines.
SQLite insert mode
for t in $(sqlite3 $DB .tables); do
echo -e ".mode insert $t\nselect * from $t;"
done | sqlite3 $DB > backup.sql
This is a nice and customizable solution, but it doesn't work if your columns contain blob objects, like the 'Geometry' type in SpatiaLite.
Diff the dump with the schema
sqlite3 some.db .schema > schema.sql
sqlite3 some.db .dump > dump.sql
grep -v -f schema.sql dump.sql > data.sql
Not sure why, but it is not working for me.
Another (new) possible solution
There is probably no single best answer to this question, but one that works for me is to grep out the inserts, taking into account that there may be newlines in the column values, with an expression like this:
grep -Pzo "(?s)^INSERT.*\);[ \t]*$"
To select the tables to be dumped, .dump accepts a LIKE argument to match the table names, but if this is not enough, a simple script is probably the better option:
TABLES='table1 table2 table3'
echo '' > /tmp/backup.sql
for t in $TABLES ; do
echo -e ".dump ${t}" | sqlite3 database.db3 | grep -Pzo "(?s)^INSERT.*?\);$" >> /tmp/backup.sql
done
or, something more elaborate that respects foreign keys and wraps the whole dump in a single transaction:
TABLES='table1 table2 table3'
echo 'BEGIN TRANSACTION;' > /tmp/backup.sql
echo '' >> /tmp/backup.sql
for t in $TABLES ; do
echo -e ".dump ${t}" | sqlite3 $1 | grep -Pzo "(?s)^INSERT.*?\);$" | grep -v -e 'PRAGMA foreign_keys=OFF;' -e 'BEGIN TRANSACTION;' -e 'COMMIT;' >> /tmp/backup.sql
done
echo '' >> /tmp/backup.sql
echo 'COMMIT;' >> /tmp/backup.sql
Take into account that the grep expression will fail if the string ); is present in any of the column values.
To restore it (in a database with the tables already created)
sqlite3 -bail database.db3 < /tmp/backup.sql

According to the SQLite documentation for the Command Line Shell For SQLite, you can export an SQLite table (or part of a table) as CSV simply by setting the "mode" to "csv" and then running a query to extract the desired rows of the table:
sqlite> .header on
sqlite> .mode csv
sqlite> .once c:/work/dataout.csv
sqlite> SELECT * FROM tab1;
sqlite> .exit
Then use the ".import" command to import CSV (comma separated value) data into an SQLite table:
sqlite> .mode csv
sqlite> .import C:/work/dataout.csv tab1
sqlite> .exit
Please read the further documentation about the two cases to consider: (1) Table "tab1" does not previously exist and (2) table "tab1" does already exist.
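The same CSV export can also be done non-interactively from the command line; a sketch, assuming the database file is database.db3:
sqlite3 -header -csv database.db3 "SELECT * FROM tab1;" > dataout.csv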

The best method would be to do what the sqlite3 .dump command itself does, minus the schema parts.
Example pseudocode:
SELECT 'INSERT INTO ' || tableName || ' VALUES(' ||
    {for each column} quote(column) {joined with commas}
    || ');' FROM tableName ORDER BY rowid DESC
See: src/shell.c:838 (for sqlite-3.5.9) for actual code
You might even just take that shell code, comment out the schema parts, and use that.
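For a concrete table this boils down to something like the following sketch, where mytable and its columns id and name are assumed names; quote() takes care of escaping strings and rendering blobs:
sqlite3 database.db3 "SELECT 'INSERT INTO mytable VALUES(' || quote(id) || ',' || quote(name) || ');' FROM mytable;" > data.sql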

This version works well with newlines inside inserts:
sqlite3 database.sqlite3 .dump | grep -v '^CREATE'
In practice this excludes all lines starting with CREATE, which are less likely to contain newlines.

The answer by retracile should be the closest one, yet it does not work for my case. One insert query just broke in the middle and the export stopped. Not sure what the reason is. However, it works fine with .dump.
Finally I wrote a tool to split up the SQL generated from .dump:
https://github.com/motherapp/sqlite_sql_parser/

You could do a select on the tables, inserting commas after each field to produce a CSV, or use a GUI tool to return all the data and save it to a CSV.
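A minimal sketch of the first option, assuming a table mytable with columns id and name (note that this naive concatenation does not quote fields that themselves contain commas, which .mode csv would handle):
sqlite3 database.db3 "SELECT id || ',' || name FROM mytable;" > mytable.csv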

Related

Duplicating a SQLite table, indexes, and data [duplicate]


sql insert query through text file

I'm trying to insert data from excel sheet to sql database. The query is stored in a text file as follows:
insert into [demo].[dbo].[relative]
select *
from openrowset('Microsoft.Jet.OLEDB.4.0','Excel 8.0;Database=D:\relative.xls','select * from [sheet1$]');
When I am executing the following command:
sqlcmd -S ADMIN-PC/SEXPRESS -i d:\demo.txt.
it is showing this error:
Msg 7357, Level 16, State 2, Server ADMIN-PC\SEXPRESS, Line 1
Can anyone please help in rectifying my problem.
Try using the SQL Server import wizard to create a new table from the xls file, and then insert that data into the existing table from there. The problem you are having may be due to incompatibility between a 64-bit SQL Server instance and 32-bit Excel.
Or try using bcp
bcp demo.dbo.relative in "D:\relative.xls" -c -T
There is another way to get the same result:
create a temp table from your source table,
select * into #sometable from YOUR_DATABASE_TABLE with (nolock)
Then do your OPENROWSET, BCP, etc. from there.
You can create a shell script which will automatically read the insert commands from the .csv file and then write them to the database. If you want, I can help you with it. All you need to do is put all the insert statements in the .csv file.
#!/bin/ksh
# read the insert statements from the .csv file and feed them to sqlplus
sqlplus -silent /nolog << EOF > /dev/null
connect username/pwd@"Connection String"
set linesize 0;
set pagesize 0;
set echo off;
$(cat your_insert_statements.csv)
commit;
EOF
It will read the .csv file and automatically insert the records in the database.

Copy table structure to new table in sqlite3

Is there an easy way to copy an existing table structure to a new one?
(don't need the data, only the structure -> like id INTEGER, name varchar(20) ...)
Thx
You could use a command like this:
CREATE TABLE copied AS SELECT * FROM mytable WHERE 0
but due to SQLite's dynamic typing, most type information would be lost.
If you need just a table that behaves like the original, i.e., has the same number and names of columns, and can store the same values, this is enough.
If you really need the type information exactly like the original, you can read the original SQL CREATE TABLE statement from the sqlite_master table, like this:
SELECT sql FROM sqlite_master WHERE type='table' AND name='mytable'
SQLite cannot clone a table with its primary key, defaults, and indices this way.
Hacking it with another tool is necessary.
In a shell, replace the table name with sed:
sqlite3 dbfile '.schema oldtable' | sed '1s/oldtable/newtable/' | sqlite3 dbfile
Then you can check the new table:
sqlite3 dbfile '.schema newtable'
Primary key, defaults and indices will be preserved.
I hope this command can help you.
sqlite> .schema
CREATE TABLE [About](
[id],
[name],
[value]);
The .schema command gives you the structure of the About table, as it could be created by typing commands into the SQLite interpreter by hand.
Paste in and execute the CREATE block, giving the table a new name:
sqlite> CREATE TABLE [AboutToo](
[id],
[name],
[value]);
The .tables command will now show that you have two tables, the old one and the new "copy".
sqlite> .tables
About AboutToo
P.S.: sqlite> is the command prompt you get in the console after launching the SQLite.exe interpreter. To get it, go to www.sqlite.org
Just for the record - This worked for me:
CREATE TABLE mytable (
contact_id INTEGER PRIMARY KEY,
first_name TEXT NOT NULL,
last_name TEXT NOT NULL,
email TEXT NOT NULL UNIQUE,
phone TEXT NOT NULL UNIQUE
);
-- Two variations
INSERT INTO mytable VALUES ( 1, "Donald", "Duck", "noone@nowhere.com", "1234");
INSERT INTO mytable ( contact_id,first_name,last_name,email,phone ) VALUES ( 2, "Daisy", "Duck", "daisy@nowhere.com", "45678");
.output copied.sql
-- Add new table name
.print CREATE TABLE copied (
-- Comment out first line from SQL
SELECT "-- " || sql FROM sqlite_master WHERE type='table';
.output
.read copied.sql
.schema
select * from copied;
Beware that this only works if the schema is line-wrapped after CREATE TABLE mytable (.
Otherwise you'll need some string replacement using .system.
Yes, with SQLiteStudio you can use the last icon in the structure view, called "create similar table", to create a similar table from any existing table.
I would prefer:
> sqlite3 <db_file>
sqlite3 > .output <output_file>
sqlite3 > .dump <table_name>
The lines above generate a dump of the table that includes both DDL and DML statements.
Make changes in this file, i.e. find and replace the table name with the new table name.
Also, replace "CREATE TRIGGER " with "CREATE TRIGGER <NEWTABLE>_"; this prefixes the existing trigger names with the new table name, making them unique and avoiding conflicts with the existing triggers. Once all schema changes are made, read the file back into the database using .read:
sqlite3 > .read output_file
This can be scripted in a shell file using commands like:
echo ".dump <table>" | sqlite3 <db_file> > <table_file>
sed -i.bak "s/\b<table_name>\b/<new_table_name>/g" <table_file>
sed -i.bak "s/\bCREATE TRIGGER \b/CREATE TRIGGER <new_table_name_>/g" <table_file>
echo ".read <table_file>" | sqlite3 <db_file>
rm <table_file>.bak
For example:
If you have table T and the new table is TClone, in db file D.sqlite, with a dump file F to be created, then:
echo ".dump T" | sqlite3 D.sqlite > F
sed -i.bak "s/\bT\b/TClone/g" F
sed -i.bak "s/\bCREATE TRIGGER \b/CREATE TRIGGER TClone_>/g" F
echo ".read F" | sqlite3 D.sqlite
rm F.bak
Finally, you can generalize this script by creating a parameterized version where you pass source_table, destination_table, and db_file as parameters, so it can be used to clone any table; a sketch of such a script follows at the end of this answer.
I tested this and it works.
Testing :
sqlite3 <database_file>
sqlite3 > select * from <new_table>;
should give you the same results as the original table, and
sqlite3 > .schema <new_table>
should show the same schema as the original table, with the new name.
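A sketch of such a parameterized script (clone_table.sh is a hypothetical name; it takes the db file, the source table and the new table as arguments):
#!/bin/sh
# usage: ./clone_table.sh <db_file> <source_table> <new_table>
DB="$1"; SRC="$2"; DST="$3"; TMP="${SRC}_dump.sql"
echo ".dump ${SRC}" | sqlite3 "$DB" > "$TMP"
sed -i.bak "s/\b${SRC}\b/${DST}/g" "$TMP"
sed -i.bak "s/\bCREATE TRIGGER \b/CREATE TRIGGER ${DST}_/g" "$TMP"
echo ".read ${TMP}" | sqlite3 "$DB"
rm -f "${TMP}" "${TMP}.bak"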

Export an entire database data ( db2 ) to csv file

I have been looking for a command to export an entire database's data (DB2) to CSV.
I googled it, but it only came up with the db2 export command, which exports table by table.
For example
export to employee.csv of del select * from employee
Therefore I have to do it for every table, which can be very annoying. Is there a way I can export an entire database in DB2 to CSV (or some other format that I can use with other databases)?
Thank you
You could read the SYSIBM.SYSTABLES table to get the names of all the tables, and generate an export command for each table.
Write the export commands to an SQL file.
Read the SQL file, and execute the export commands.
Edited to add: Warning - some of your foreign keys may not be synchronized, as the database can be changed while you're reading the various tables.
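A sketch of that approach, assuming the database is named MYDB and you only want the tables of a schema named MYSCHEMA:
db2 connect to MYDB
db2 -x "SELECT 'EXPORT TO ' || RTRIM(NAME) || '.csv OF DEL SELECT * FROM ' || RTRIM(CREATOR) || '.' || RTRIM(NAME) || ';' FROM SYSIBM.SYSTABLES WHERE CREATOR = 'MYSCHEMA' AND TYPE = 'T'" > export_all.sql
db2 -tvf export_all.sql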
# loop through all the db2 db tables in bash and export them
# all this is just another oneliner ...
# note the filter by schema name ...
db2 -x "select schemaname from syscat.schemata where 1=1 and schemaname like 'GOSALES%'" | { while read -r schema ; do db2 connect to GS_DB ; db2 -x "SELECT TABLE_NAME FROM SYSIBM.TABLES WHERE 1=1 AND TABLE_CATALOG='GS_DB' AND TABLE_SCHEMA = '$schema'" | { while read -r table ; do db2 connect to GS_DB; echo -e "start table: $table \n" ; db2 -td# "EXPORT TO $schema.$table.csv of del modified by coldel; SELECT * FROM $schema.$table " ; echo -e " stop table: $table " ; done ; } ; done ; }
wc -l *.csv | sort -nr
4185939 total
446023 GOSALES.ORDER_DETAILS.csv
446023 GOSALESDW.SLS_SALES_ORDER_DIM.csv

How can I dump data that I want of some SQLite3 tables in SQL format?

I know how to select which table I want to dump in SQL format with the shell command:
$ ./sqlite3 test.db '.dump mytable' > test.sql
But this command selects all the data of "mytable".
Can I select the data I want from my table before dumping, and how?
In other terms, I am looking for a command like:
$ ./sqlite3 test.db '.dump select name from mytable' > test.sql
Obviously this command does not work :'(
The only way to do it within the sqlite console is to create a temporary table:
CREATE TABLE tmp AS SELECT field1, field2, ... FROM yourTable WHERE ... ;
.dump tmp
DROP TABLE tmp;
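The same idea can be scripted from the command line; a sketch, using the name column from the question:
sqlite3 test.db "CREATE TABLE tmp AS SELECT name FROM mytable;"
sqlite3 test.db '.dump tmp' > test.sql
sqlite3 test.db "DROP TABLE tmp;"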