How can you run a script in multiple PostgreSQL databases in Mac?
for db in first_db second_db ...; do
  psql -d "$db" -f yourscript.sql
done
Of course there will be many variations depending on exactly what you want to do.
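For example, one variation, assuming you want to run the script against every non-template database on the local cluster (adjust the connection options to your setup):

for db in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
  echo "Running yourscript.sql on $db"
  psql -d "$db" -f yourscript.sql
done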
Is there any way to insert 50k datasets into a PostgreSQL database using DBeaver?
Locally it worked fine for me and took about 1 minute, after I also changed the memory settings of PostgreSQL and DBeaver. But in our development environment, the 50k queries did not work.
Is there a way to do this anyway, or do I need to split the queries and run, for example, 10k queries 5 times? Any trick?
EDIT: by "did not work" I mean I got an error after 2500 seconds saying something like "too much data ranges".
If you intend to execute a giant SQL script via the DBeaver interface: don't even try.
If you have a CSV file, DBeaver gives you an import tool for it.
Even better, as described in the comments, the COPY command is the right tool.
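A minimal sketch of that approach, assuming a table named mytable and a CSV file (the table, column, and file names here are placeholders):

-- Server-side file, read directly by the PostgreSQL backend:
COPY mytable (col1, col2) FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER true);
-- Client-side file, streamed through psql instead:
\copy mytable (col1, col2) FROM 'data.csv' WITH (FORMAT csv, HEADER true)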
If you have a giant SQL file, you need to use the command line, for example:
psql -h host -U username -d myDataBase -a -f myInsertFile
Like in this post: Run a PostgreSQL .sql file using command line arguments
I am new to Postgres and I am trying to learn from an online tutorial. One of the first things is to load the data, as follows:
Finally, run psql -U <username> -f clubdata.sql -d postgres -x -q to
create the 'exercises' database, the Postgres 'pgexercises' user, the
tables, and to load the data in. Note that you may find that the sort
order of your results differs from those shown on the web site:
I am using pgAdmin 4 and opened the SQL shell. However, I wasn't able to load this database. First of all, how can I figure out what my current username is?
Secondly, I have never worked with the command line before and am quite unsure how to do this. Could someone break this down step by step?
You can run "psql --help" for more help. You never have a current username as such; you have to specify one, but start with "-U postgres" and ask again if that doesn't work.
Your SQL file to load will need its folder path, or you could open a command prompt and change to the folder where your clubdata file is. Your command line assumes there is already a database named postgres, which there probably is. Try again:
psql -U postgres -f clubdata.sql -d postgres -x -q
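For reference, a breakdown of what each flag in that command does (run it from a regular terminal, in the folder containing clubdata.sql):

#   -U postgres      connect as the 'postgres' user
#   -d postgres      connect to the existing 'postgres' database
#   -f clubdata.sql  execute the SQL statements in this file
#   -x               use expanded display for any query output
#   -q               quiet mode (suppresses informational messages)
psql -U postgres -d postgres -f clubdata.sql -x -q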
The command psql is for the command line client. You need to run this in a terminal.
I wrestled with this myself, despite a little CLI experience with psql. It may help to remove the -q flag at the end to make the output non-quiet, so you can see what's going on.
Lastly, beware that the import creates a schema, so you need to read up on schemas. See this related question for a bit more background: https://dba.stackexchange.com/questions/264398/cant-find-any-tables-after-psql-dump-import-from-pgexercises-com
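A short sketch of how to check which schema the import created and query it afterwards (the schema and table names below are placeholders):

-- List the schemas in the current database (or use \dn in psql)
SELECT nspname FROM pg_catalog.pg_namespace;

-- Put the imported schema on the search path, then query normally
SET search_path TO imported_schema, public;
SELECT * FROM some_table LIMIT 5;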
I want to take a backup of all functions in my Postgres database. How can I back up only the functions in Postgres?
Use pg_get_functiondef; see system information functions. pg_get_functiondef was added in PostgreSQL 8.4.
SELECT pg_get_functiondef('proc_name'::regproc);
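Note that the regproc cast fails if the function name is overloaded; in that case include the argument types and cast to regprocedure instead (the name and argument types below are just placeholders):

SELECT pg_get_functiondef('proc_name(integer, text)'::regprocedure);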
To dump all functions in a schema you can query the system tables in pg_catalog; say if you wanted everything from public:
SELECT pg_get_functiondef(f.oid)
FROM pg_catalog.pg_proc f
INNER JOIN pg_catalog.pg_namespace n ON (f.pronamespace = n.oid)
WHERE n.nspname = 'public';
It's trivial to change the above to say "from all schemas except those beginning with pg_" instead, if that's what you want.
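For example, a sketch of that variant (also skipping information_schema):

SELECT pg_get_functiondef(f.oid)
FROM pg_catalog.pg_proc f
INNER JOIN pg_catalog.pg_namespace n ON (f.pronamespace = n.oid)
WHERE n.nspname NOT LIKE 'pg\_%'
  AND n.nspname <> 'information_schema';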
Using psql, you can dump this to a file with:
psql -At dbname > /path/to/output/file.sql <<"__END__"
... the above SQL ...
__END__
To run the output in another DB, use something like:
psql -1 -v ON_ERROR_STOP=1 -f /path/to/output/file.sql target_db_name
If you're replicating functions between DBs like this, though, consider storing the authoritative copy of the function definitions as an SQL script in a revision control system like svn or git, preferably packaged as a PostgreSQL extension. See packaging extensions.
You can't tell pg_dump to dump only functions. However, you can make a dump without data (-s or --schema-only) and filter it on restoring. Note the --format=c (also -Fc) part: this will produce a file suitable for pg_restore.
First take the dump:
pg_dump -U username --format=c --schema-only -f dump_test your_database
Then create a list of the functions:
pg_restore --list dump_test | grep FUNCTION > function_list
And finally restore them (-L or --use-list specifies the list file created above):
pg_restore -U username -d your_other_database -L function_list dump_test
I have around 150 MySQL databases and I need to export 1 table from each of them.
Is this possible? The username and password are identical for each DB.
I'm sure there's a more compact way to do it but this should work.
#!/bin/bash
mysql -B -e "show databases" | egrep -v "Database|information_schema" | while read db;
do
  echo "$db";
  mysqldump "$db" TableName > "$db.sql"
done
You may need to tweak the mysql and mysqldump calls depending on your connection information.
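For example, a sketch with explicit connection options (the host and user here are placeholders; keep the password in ~/.my.cnf rather than on the command line):

#!/bin/bash
HOST=db.example.com      # placeholder host
USER=backup_user         # placeholder user

mysql -h "$HOST" -u "$USER" -B -N -e "show databases" \
  | egrep -v "^(information_schema|performance_schema|mysql|sys)$" \
  | while read db; do
      echo "Dumping TableName from $db"
      mysqldump -h "$HOST" -u "$USER" "$db" TableName > "$db.sql"
    done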
I think in this case, iteration would be more appropriate (rather than recursion).
If you are on Linux, I'd suggest writing a simple bash script that cycles through the 150 DB names and calls mysqldump on each one.
See link text; it generates metadata for all databases and all tables, and you may be able to adapt it to export the data for you. However, this is in PHP and I am not certain which language you wish to use.
Using Toad for Oracle, I can generate full DDL files describing all tables, views, source code (procedures, functions, packages), sequences, and grants of an Oracle schema. A great feature is that it separates each DDL declaration into different files (a file for each object, be it a table, a procedure, a view, etc.) so I can write code and see the structure of the database without a DB connection. The other benefit of working with DDL files is that I don't have to connect to the database to generate a DDL each time I need to review table definitions.
In Toad for Oracle, the way to do this is to go to Database -> Export and select the appropriate menu item depending on what you want to export. It gives you a nice picture of the database at that point in time.
Is there a "batch" tool that exports
- all table DDLs (including indexes, check/referential constraints)
- all source code (separate files for each procedure, function)
- all views
- all sequences
from SQL Server?
What about PostgreSQL?
What about MySQL?
What about Ingres?
I have no preference as to whether the tool is Open Source or Commercial.
For SQL Server:
In SQL Server Management Studio, right click on your database and choose 'Tasks' -> 'Generate Scripts'.
You will be asked to choose which DDL objects to include in your script.
In PostgreSQL, simply use the -s option to pg_dump. You can get it as a plain SQL script (one file for the whole database) or in a custom format that you can then run a script against to get one file per object if you want.
The pgAdmin tool will also show you each object's SQL dump, but I don't think there's a nice way to get them all at once from there.
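A minimal sketch of that pg_dump route (the database name is a placeholder); the custom-format archive can then be listed or selectively restored with pg_restore, or split into per-object files by a script:

# Plain SQL script containing all DDL, no data
pg_dump -s mydb > mydb_schema.sql

# Custom-format archive, schema only
pg_dump -Fc -s -f mydb_schema.dump mydb
pg_restore --list mydb_schema.dump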
For MySQL, I use mysqldump. The command is pretty simple.
$ mysqldump [options] db_name [tables]
$ mysqldump [options] --databases db_name1 [db_name2 db_name3...]
$ mysqldump [options] --all-databases
Plenty of options for this. Take a look here for a good reference.
In addition to the "Generate Scripts" wizard in SSMS you can now use mssql-scripter which is a command line tool to generate DDL and DML scripts.
It's an open source and Python-based tool that you can install via:
pip install mssql-scripter.
Here's an example of what you can use to script the database schema and data to a file.
mssql-scripter -S localhost -d AdventureWorks -U sa --schema-and-data > ./adventureworks.sql
More guidelines: https://github.com/Microsoft/sql-xplat-cli/blob/dev/doc/usage_guide.md
And here is the link to the GitHub repository: https://github.com/Microsoft/sql-xplat-cli
MySQL has a great tool called MySQL Workbench that lets you reverse- and forward-engineer databases, as well as synchronize them, which I really like. You can view the DDL when executing these functions.
I wrote SMOscript, which does what you are asking for (referring to MSSQL Server).
Following what Daniel Vassallo said, this worked for me:
pg_dump -f c:\filename.sql -C -n public -O -s -d Moodle3.1 -h localhost -p 5432 -U postgres -w
Try this Python-based tool: Yet another script to split PostgreSQL dumps into object files