Dumping a table's content in sqlite3 to be imported into a new database - sql

Is there an easy way of dumping a SQLite database table into a text string with insert statements to be imported into the same table of a different database?
In my specific example, I have a table called log_entries with various columns. At the end of every day, I'd like to create a string which can then be dumped into another database with a table of the same structure called archive (and then empty the log_entries table).
I know about the ATTACH command for creating new databases; I actually wish to append to an existing one rather than create a new one every day.
Thanks!

ATTACH "%backup_file%" AS Backup;
INSERT INTO Backup.Archive SELECT * FROM log_entries;
DELETE FROM log_entries;
DETACH Backup;
All you need to do is replace %backup_file% with the path to your backup database. This approach assumes that your Archive table is already defined and that you use the same database file to accumulate your archive.
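If you want to run this at the end of every day, a minimal sketch is to save the statements above to a file and feed it to the sqlite3 shell; the file names log.sqlite and archive.sql are hypothetical, chosen only for illustration:

$ sqlite3 log.sqlite < archive.sql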

$ sqlite3 exclusion.sqlite '.dump exclusion'
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE exclusion (word string);
INSERT INTO "exclusion" VALUES('books');
INSERT INTO "exclusion" VALUES('rendezvousing');
INSERT INTO "exclusion" VALUES('motherlands');
INSERT INTO "exclusion" VALUES('excerpt');
...
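To load such a dump into another database, one option is to pipe the .dump output straight into a second sqlite3 process. This is a sketch assuming a target file named archive.sqlite (hypothetical) that does not already contain the exclusion table:

$ sqlite3 exclusion.sqlite '.dump exclusion' | sqlite3 archive.sqlite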

Related

Is there any way to insert values into a table if the table is snapshottable?

I need to perform an INSERT operation into a table created on a snapshottable location; is that possible?
(I've been working with Hive 1.1)
CREATE TABLE tablename (x string, y string)
LOCATION '/tmp/snapshots_test/';
INSERT INTO TABLE tablename VALUES('x','y');
where /tmp/snapshots_test/ is set as snapshottable:
hdfs dfsadmin -allowSnapshot /tmp/snapshots_test
I've found that, if the table is partitioned, it is possible to perform an insert even when the location is a snapshottable directory; however, it might not work depending on the Hive version.
In any case, it is always possible to delete just the files inside the snapshottable directory, to avoid deleting the whole directory through a Hive command.
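A minimal sketch of the partitioned variant, assuming Hive 1.1 and the same snapshottable location as above; the partition column dt and its value are illustrative:

CREATE TABLE tablename (x string, y string)
PARTITIONED BY (dt string)
LOCATION '/tmp/snapshots_test/';
INSERT INTO TABLE tablename PARTITION (dt='2017-01-01') VALUES ('x','y');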

Best way to replace a table

I have a web app running off a database table that is generated from a csv file.
This table needs to update frequently from the csv. The table needs to match the csv exactly, i.e. if a record no longer exists in the csv it should no longer exist in the table or it should be soft deleted.
What is the proper way to do this?
It seems the easiest way would be:
create temp table
import csv to temp table
drop live table
rename temp table to live table name
This will be scripted inside the app, so I don't think the downtime will be significant, since dropping and renaming a table shouldn't take long. But it doesn't seem like the safest way to do things, as there is a moment where no table exists.
I tried to instead do:
create temp table
import csv to temp table
update records in live table with data from temp table
delete records in live table that don't exist in temp table
In theory that sounded better, but it is extremely slow. The first method takes just a few seconds; with the second method the update takes a really long time. I let it run for 10 minutes before cancelling it because it hadn't finished.
I did the update like this:
update table_name as t
set
column1 = t.column1,
column2 = t.column2,
-- etc..
from table_name_temp
What is the proper way to handle this situation?
What you want to do is wrap your simple solution within a transaction. This will ensure that your steps are executed atomically. See: https://www.tutorialspoint.com/sql/sql-transactions.htm for more info.
Postgres supports ALTER TABLE ... RENAME.
http://www.postgresqltutorial.com/postgresql-rename-table/
https://dba.stackexchange.com/questions/100779/how-to-atomically-replace-table-data-in-postgresql
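Combining these, a minimal sketch of the swap wrapped in a transaction, assuming PostgreSQL (which allows transactional DDL) and illustrative table names live_table and live_table_temp:

BEGIN;
DROP TABLE live_table;
ALTER TABLE live_table_temp RENAME TO live_table;
COMMIT;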
The rename table method only works if there are no constraints or triggers involved.
In most cases the new table's contents will not differ too much from the old version; the trick is to suppress updates that don't change anything.
In steps (a SQL sketch follows the list):
create temp table
import csv to temp table
delete records from live table that don't exist in temp table # deletes
delete records from temp table that are EXACTLY THE SAME in live table # idempotent updates
update records in live table with data from temp table # actual updates
insert records into live table from temp table that don't yet exist # inserts
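A minimal sketch of these steps, assuming PostgreSQL, a primary key column id, and illustrative table names live and live_temp with columns column1 and column2:

DELETE FROM live WHERE id NOT IN (SELECT id FROM live_temp);              -- deletes
DELETE FROM live_temp t USING live l                                       -- drop idempotent updates
  WHERE t.id = l.id AND t.column1 = l.column1 AND t.column2 = l.column2;
UPDATE live l SET column1 = t.column1, column2 = t.column2                 -- actual updates
  FROM live_temp t WHERE l.id = t.id;
INSERT INTO live SELECT * FROM live_temp t                                 -- inserts
  WHERE NOT EXISTS (SELECT 1 FROM live WHERE live.id = t.id);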

HSQL database storage format

I am just starting to use the HSQL database and I don't understand its data storage format.
I made a simple test program that creates an entity via Hibernate. I used the file-based standalone in-process mode of HSQL.
I got this data file:
testdb.script
SET DATABASE UNIQUE NAME HSQLDB52B647B0B4
// SET DATABASE lines skipped
INSERT INTO PERSON VALUES(1,'Peter','UUU')
INSERT INTO PERSON VALUES(2,'Nasta','Kuzminova')
INSERT INTO PERSON VALUES(3,'Peter','Sagan')
INSERT INTO PERSON VALUES(4,'Nasta','Kuzminova')
INSERT INTO PERSON VALUES(5,'Peter','Sagan')
INSERT INTO PERSON VALUES(6,'Nasta','Kuzminova')
As I understand it, when I accumulate a lot of data, will all of it be stored as such an SQL script, executed at every database startup, and kept in memory?
The INSERT statements in the .script file are for MEMORY tables of the database.
If you create a table with CREATE CACHED TABLE ..., or you change a MEMORY table to CACHED with SET TABLE <name> TYPE CACHED, the data for the table will be stored in the .data file and there will be no INSERT statements in the .script file.
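For example, a minimal sketch using the PERSON table from the question (the statement is the one mentioned above; whether it is appropriate depends on your mapping):

SET TABLE PERSON TYPE CACHED;

After this, the rows move to the .data file and only the table definition stays in the .script file.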

Can I insert into different databases depending on a query in SQLite?

I have two SQLite databases attached into one connection: db1 and db2. I have a view that UNIONS the tables from both databases and adds a column 'database' specifying which database it came from. I am trying to create a trigger on insert into the view that will instead insert into the correct database.
Imagine the following schema for table Data:
id INTEGER PRIMARY KEY,
parent INTEGER,
data TEXT
This would be the schema for the view DataView:
id INTEGER PRIMARY KEY,
database TEXT,
parent INTEGER,
data TEXT
What I have so far:
CREATE TRIGGER DataViewInsertTrigger AFTER INSERT ON DataView
BEGIN
INSERT INTO database.Data
SELECT database
FROM DataView
WHERE id=new.parent
END;
Is what I'm trying to do even possible? If so, how would I finish the trigger?
No, you cannot insert into an entirely different database based on information you get in a trigger. The trigger executes with a context that is specific to the database which invoked it. The other database would be in a completely unrelated file, in SQLite.
The fact that you have a single connection attaching the two doesn't make one available from the other. What would happen if you tripped the trigger from a query made via a connection which only loaded the one DB?
Perhaps you want two tables in the same database?
While Borealid is correct that the trigger itself cannot insert into a different file, what you can do is call a custom SQLite function which itself generates a query to insert into a different file.

Ruby File that Will Log All SQL INSERTs and SQL DELETEs (and only those commands)

I am working with a PostgreSQL database. I have written a .rb file I am using to manipulate data in the database. I want to be able to log all the SQL INSERTs and DELETEs elicited by this file. How do I go about that?
At the start of your script, create the needed temporary tables and add two triggers, one on insert and one on delete, and have them fire for each row accordingly (a trigger-based sketch follows the rule example below). It also works with rules:
create temporary table foo_log_ins (like foo);
create rule log_foo_ins as
on insert to foo
do also
insert into foo_log_ins select new.*;
create temporary table foo_log_del (like foo);
create rule log_foo_del as
on delete to foo
do also
insert into foo_log_del select old.*;
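A minimal sketch of the trigger-based variant mentioned above, assuming PostgreSQL with PL/pgSQL enabled; the function and trigger names are illustrative, and only the insert side is shown (the delete side is symmetric, using old instead of new):

create or replace function log_foo_ins() returns trigger as $$
begin
  -- copy the newly inserted row into the log table
  insert into foo_log_ins select (new).*;
  return new;
end;
$$ language plpgsql;

create trigger foo_ins_logger
  after insert on foo
  for each row execute procedure log_foo_ins();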