Import SQL statements from another SQL script

I have an SQL script that does a bunch of stuff to some existing tables in my DB, but part of that script needs to operate on data from a table that does not yet exist in order to update some values.
That table is only temporary; I have a .sql file that contains:
CREATE TABLE big_table
POPULATION of that same big_table
So the idea is: my SQL script imports that .sql file and executes it (creating big_table in my DB and populating it), then my already existing script runs all its operations, and finally I just run DROP TABLE big_table since I'll no longer need it.
Since this is a PostgreSQL database, I've thought about:
db_name < file.sql
, also:
mysql -u username -p db_name < file.sql
, but I cannot include those in an SQL script; I can only run them through a shell, which is not the idea.
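If the main script is executed with psql, the usual way to do this is the \i (include) meta-command, which reads and runs another file in place:

```sql
-- main.sql, run with: psql -d db_name -f main.sql
\i /path/to/file.sql   -- creates and populates big_table

-- ... the existing operations that use big_table ...

DROP TABLE big_table;
```

Note that \i is a psql meta-command, not SQL, so this only works when the script is fed through psql; the path here is a placeholder for your actual file.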

How to save SQL query results as a list of INSERT statements into a .sql file

I am working on a PostgreSQL database. I would like to get the results of a SELECT query as a list of INSERT statements and have them saved into a .sql file (let's say exported_data.sql).
This is because later I would like to run the .sql file on another database and have the data "imported" into the table, so that the table gets "updated", for example with:
psql my_database -h my_host < exported_data.sql
Let's say my query is:
SELECT *
FROM params_table
WHERE params_type = 'tkt';
From this thread I see that one way to do it is to:
Create a temporary table filled with the results of the query
CREATE TABLE temporary_table
AS
SELECT *
FROM params_table
WHERE params_type = 'tkt';
Export the table as a list of INSERT statements into a .sql file
pg_dump --table=temporary_table --data-only --column-inserts my_database -h my_host > exported_data.sql
Remove the table, to keep the database clean.
DROP TABLE temporary_table;
Is there a way to do it in one shot?
I am asking just because I would not like to create and then remove a temporary table every time I need to export some data.
I saw from this thread that for a MySQL database it is possible to do it in one shot:
mysqldump -u root -p my_database -h my_host params_table --where="params_type = 'tkt'" > exported_data.sql
So I have read the PostgreSQL documentation for pg_dump, but I cannot find an equivalent of the --where clause.
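One way to skip the temporary table entirely (a sketch, not an exact pg_dump equivalent: the column list of params_table is assumed here to be params_type and params_value, so adjust it to the real columns) is to have the query build the INSERT statements itself with format(), which quotes the values safely via %L:

```sql
-- run inside psql; writes one INSERT statement per matching row
\copy (SELECT format('INSERT INTO params_table (params_type, params_value) VALUES (%L, %L);', params_type, params_value) FROM params_table WHERE params_type = 'tkt') TO 'exported_data.sql'
```

The resulting file can then be replayed with psql on the other database, just like the pg_dump output.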

pg_restore clean table psql

I am trying to restore a table I dumped to its current location. The table was dumped using:
pg_dump -t table db > table.sql -U username
Trying to restore, I'm using:
pg_restore -c --dbname=db --table=table table.sql
I am using -c since the table currently exists and has data, however it returns:
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.
I've also tried:
-bash-4.2$ psql -U username -d db -1 -f table.sql
But since the data is there already and there's no --clean option for this psql command (I believe), it returns:
psql:table.sql:32: ERROR: relation "table" already exists
Is there a way to use pg_restore correctly or use psql with a --clean option?
Bad news for you, friend. You can’t pg_restore from a plain-text dump. It’s in the docs:
“pg_restore is a utility for restoring a PostgreSQL database from an archive created by pg_dump in one of the non-plain-text formats.” https://www.postgresql.org/docs/9.6/static/app-pgrestore.html
If you can re-execute pg_dump, you can do so with the -Fc flag, which will produce an artifact that you can restore with pg_restore. If you are stuck with the table.sql in plain-text format, you do have a few options (I’m calling your target table my_fancy_table in the two examples below):
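Concretely, the custom-format round trip could look like this (a sketch; substitute your own table name and connection options):

```shell
pg_dump -Fc -t my_fancy_table -U username db > my_fancy_table.dump
pg_restore -c --dbname=db --table=my_fancy_table my_fancy_table.dump
```

With -c, pg_restore drops the table before re-creating it, which is the --clean behavior missing from a plain psql run.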
Option 1:
Drop the table: drop table my_fancy_table;
Now run your psql command: psql <options> -f table.sql
If other tables have foreign keys referencing rows in this table, you might not be able to drop it.
Option 2:
Upsert from a temp table
You can edit the sql file (since it’s in plain text) and change the table name to my_fancy_table_temp (or any table name that does not yet exist). Now you'll have two tables, a temp one from the sql file and the real one that's been there the whole time. You can then write some upsert SQL to insert or update the real table with the rows from the temp table like so:
insert into my_fancy_table (column1, column2)
select column1, column2 from my_fancy_table_temp
on conflict (my_fancy_unique_column) do update
set column1 = excluded.column1,
    column2 = excluded.column2;

create tables with a bash file from (Postgres) sql select statement stored in files

Given a set of complex, consecutive (Postgres) SQL SELECT statements, each stored in a single .sql file, how can I write a bash script that drops (if it exists) and creates a table with the results of each statement, using the file name as the table name?
Background: we share these .sql files in a git repo where different users want to use the statements in different ways. I want to automatically create tables, while others use temp tables, so I don't want to write the 'CREATE TABLE ...' into the sql files' headers.
A skeleton for your shell script could look like this:
set -e  # stop immediately on any error
for script in s1 s2 s3
do
  echo "processing $script"
  select=$(cat "$script")
  psql -d dbname -U user <<EOF
DROP TABLE IF EXISTS "$script";
CREATE TABLE "$script" AS $select;
EOF
done
Note however that not every SELECT is suitable as the source for a CREATE TABLE ... AS SELECT ...
As the simplest example, consider the case when two different columns share the same name. This is legal in a SELECT, but an error condition when creating a table from it.
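A file-driven variant of the same idea derives each table name from its file name with basename (a sketch; the file names are hypothetical, and the psql call is commented out so the snippet runs anywhere):

```shell
# Derive a table name from each SQL file's name by stripping the
# directory part and the .sql suffix.
for file in queries/daily_sales.sql queries/top_users.sql; do
  table=$(basename "$file" .sql)
  echo "would create table: $table"
  # psql -d dbname -U user <<SQL
  # DROP TABLE IF EXISTS "$table";
  # CREATE TABLE "$table" AS $(cat "$file");
  # SQL
done
# prints:
# would create table: daily_sales
# would create table: top_users
```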

Delete rows of a table specified in a text file in Postgres

I have a text file containing the row numbers of the rows that should be deleted in my table like this:
3
32
40
55
[...]
How can I get a PostgreSQL compatible SQL statement which deletes each of these rows from my table using the text file?
Doing it once could look like this:
CREATE TEMP TABLE tmp_x (nr int);
COPY tmp_x FROM '/absolute/path/to/file';
DELETE FROM mytable d
USING tmp_x
WHERE d.mycol = tmp_x.nr;
DROP TABLE tmp_x; -- optional
Or use the psql meta-command \copy. The manual:
COPY naming a file or command is only allowed to database superusers
or users who are granted one of the roles pg_read_server_files,
pg_write_server_files, or pg_execute_server_program, since it
allows reading or writing any file or running a program that the
server has privileges to access.
Do not confuse COPY with the psql instruction \copy. \copy
invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores
the data in a file accessible to the psql client. Thus, file
accessibility and access rights depend on the client rather than the
server when \copy is used.
For repeated use, wrap it into a PL/pgSQL function with file-path / table name / column name as parameters. If any identifiers are dynamic you must use EXECUTE for the DELETE.
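Such a function could be sketched like this (an illustration only; the function name, signature, and row-count reporting are my own additions, and the dynamic identifiers go through format() to guard against SQL injection):

```sql
CREATE OR REPLACE FUNCTION delete_rows_from_file(_file text, _tbl regclass, _col text)
  RETURNS bigint
  LANGUAGE plpgsql AS
$func$
DECLARE
   _ct bigint;
BEGIN
   CREATE TEMP TABLE tmp_x (nr int) ON COMMIT DROP;
   EXECUTE format('COPY tmp_x FROM %L', _file);   -- server-side COPY: file must be on the server
   EXECUTE format('DELETE FROM %s d USING tmp_x WHERE d.%I = tmp_x.nr', _tbl, _col);
   GET DIAGNOSTICS _ct = ROW_COUNT;               -- number of deleted rows
   RETURN _ct;
END
$func$;

-- example call:
-- SELECT delete_rows_from_file('/absolute/path/to/file', 'mytable', 'mycol');
```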
If you work with \copy, you have to do that in psql in the same session before executing SQL commands (possibly wrapped in a server-side function).
I have a slightly different solution than Erwin's. I would use IN, because performing a join (USING) would increase the number of rows the query has to process.
CREATE TEMP TABLE tmp_x (nr int);
COPY tmp_x FROM '/absolute/path/to/file';
DELETE FROM mytable d
WHERE d.mycol IN (SELECT nr FROM tmp_x);
DROP TABLE tmp_x;

Clean everything from database tables

I have a database with a lot of tables, and I want to clean the data from the tables. How can I do it?
Something like doing a for loop over every table in the database, running DELETE FROM table_name;
Thanks.
Another alternative:
use mysqldump to dump the table schema only, with add-drop-table;
once you have the dump file, execute it, and all the tables will be dropped and re-created.
If you want to reset your auto-increment counters, edit the dump file accordingly before running it.
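The mysqldump route could look like this (a sketch; --add-drop-table is actually on by default in mysqldump, but shown explicitly here, and the database name is a placeholder):

```shell
mysqldump -u root -p --no-data --add-drop-table your_db > schema.sql
mysql -u root -p your_db < schema.sql
```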
if you are using Linux, this can be done with a bash script, like:
for table in $(mysql -N <<< "show tables from your_db")
do
  mysql -N your_db <<< "truncate table $table"
done
One way is to create a script which will generate all the SQL statements for you. Here is a version for Linux:
echo "select concat('truncate ', table_name, ';') as '' from \
information_schema.tables where table_schema='YOURDATABASE';"\
|mysql -u USER -p > /tmp/truncate-all-tables.sql
The simplest way to do this is simply to issue a DELETE or TRUNCATE command on a per-table basis, either from a .sql batch file or via a scripting language. (Whilst it's possible to interrogate the table schema information and use this, it may not be suitable depending on whether or not you have the required access privileges and whether the tables in question follow a predictable naming convention.)
Incidentally, whilst you can use DELETE, this won't reclaim the used disk space and any auto_increment fields will remember the previous ID. (If you want to get the disk space back and reset the IDs to 1, use "TRUNCATE <table name>;".)