MySQL: dump structure of all tables and data of some tables

I'm trying to dump the structure of all the tables in our database, and then only the data of the ones I specifically want, but I seem to be doing something wrong, as the empty tables are not being created for the ones I exclude from the data dump.
I have a text file which specifies which tables I want to dump the data for (called showtables.txt):
SHOW TABLES FROM mydb
WHERE Tables_in_mydb NOT LIKE '%_history'
AND Tables_in_mydb NOT LIKE '%_log';
I am then doing this command to dump the structure of all tables, and then the data of the tables returned by that query in the text file:
mysqldump -u root -pmypassword mydb --no-data > mydump.sql; mysql -u root -pmypassword < showtables.txt -N | xargs mysqldump mydb -u root -pmypassword > mydump.sql -v
I am getting the dump of all the tables included in the results of the showtables query, but I am not getting the structures of the rest of the tables.
If I run just the structure part as a single command, that works fine and I get the structures dumped for all tables. But combining it with the data dump seems to not work.
Can you point me to where I'm going wrong with this?
Thanks.

I think you've got the order of your command-line arguments wrong (the redirection to a file should come at the end), and you need an extra parameter for xargs so the database name can be passed to mysqldump in the right position.
Additionally, you need to append (>>) the dump data, otherwise you'd be overwriting the mydump.sql file for each table:
mysqldump -u root -pmypassword mydb --no-data > mydump.sql
mysql -u root -pmypassword -N < showtables.txt | xargs -I {} mysqldump -v -u root -pmypassword mydb {} >> mydump.sql
Sources: http://www.cyberciti.biz/faq/linux-unix-bsd-xargs-construct-argument-lists-utility/

Working off of Jon's answer: the -I in xargs will run a separate mysqldump command for each table. It's easier to just use the xargs default, which appends the piped input as arguments to a single command, since mysqldump accepts the list of tables you'd like to dump as its last arguments.
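Run locally, that idea could look like this (a minimal sketch reusing the placeholder credentials from the question, with the SHOW TABLES filter inlined via -e instead of the showtables.txt file):
mysqldump -u root -pmypassword --no-data mydb > mydump.sql
mysql -u root -pmypassword -N -e "SHOW TABLES FROM mydb WHERE Tables_in_mydb NOT LIKE '%_history' AND Tables_in_mydb NOT LIKE '%_log'" | xargs mysqldump -v -u root -pmypassword mydb >> mydump.sql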
My solution also shows connecting through a bastion host. gzip'ing before streaming over the SSH connection is vastly faster than sending the uncompressed SQL over the wire.
FILE=~/production.sql.gz
HOST=ext-db-read-0.cdzvblmx0n9h.us-west-1.rds.amazonaws.com
USER=username
PASS="s3cret"
DB=myapp_prod
EXCLUDE="'activities', 'changelogs'"
ssh bastion.mycompany.com <<EOF > $FILE
mysqldump -h $HOST -u $USER -p$PASS $DB --no-data | gzip
mysql -h $HOST -u $USER -p$PASS -N -e "SHOW TABLES WHERE Tables_in_$DB NOT IN ($EXCLUDE)" $DB | xargs mysqldump -v -h $HOST -u $USER -p$PASS $DB | gzip
EOF
If you don't want to save a .gz file, just pipe it through gzip -d:
ssh bastion.mycompany.com <<EOF | gzip -d > $FILE
etc
or directly to your local db:
ssh bastion.mycompany.com <<EOF | gzip -d | mysql -uroot myapp_development

Related

How to execute multiple SQL files from a local directory in PostgreSQL/PgAdmin4 in one transaction in Windows?

I have a folder in a directory on my PC which contains multiple SQL files. Each of the files is a Postgres function. I want to execute every SQL file situated in the folder at once against the PostgreSQL server, using PgAdmin or any other way. How can I accomplish this?
I apologize if I'm oversimplifying your question, but if the main issue is how to execute all SQL files without having to call them one by one, you just need to put them in a loop, e.g. in bash calling psql:
#!/bin/bash
# run every .sql file in the current directory against the database
for f in *.sql
do
  psql -h dbhost -d db -U dbuser -f "$f"
done
Or cat them and pipe the result to psql stdin:
$ cat /path/to/files/*.sql | psql -h dbhost -d db -U dbuser
And if you need them to run in a single transaction, consider merging the SQL files, e.g. using cat (this assumes all statements in your SQL files are properly terminated):
$ cat /path/to/files/*.sql > merged.sql
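The merged file can then be executed in one transaction with psql's --single-transaction flag (a short sketch, reusing the placeholder host/db/user from above):
$ psql -h dbhost -d db -U dbuser --single-transaction -f merged.sql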

How to execute Postgres SQL queries from a batch file?

I need to execute SQL from a batch file.
I am executing the following to connect to Postgres and select data from a table:
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
select * from test;
I am able to connect to the database; however, I'm getting the error
'select' is not recognized as an internal or external command,
operable program or batch file.
Has anyone faced such an issue?
This is one of the queries I am trying; something similar works in a shell script (please ignore syntax errors in the query if there are any):
copy testdata (col1,col2,col3) from '%filepath%/%csv_file%' with csv;
You could pipe it into psql
(
echo select * from test;
) | C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
When closing parentheses are part of the SQL query, they have to be escaped with three carets:
(
echo insert into testconfig(testid,scenarioid,testname ^^^) values( 1,1,'asdf'^^^);
) | psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
Use the -f parameter to pass the SQL file name:
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME% -f 'sql_batch_file.sql'
http://www.postgresql.org/docs/current/static/app-psql.html
-f filename
--file=filename
Use the file filename as the source of commands instead of reading commands interactively. After the file is processed, psql terminates. This is in many ways equivalent to the meta-command \i.
If filename is - (hyphen), then standard input is read until an EOF indication or \q meta-command. Note however that Readline is not used in this case (much as if -n had been specified).
If running on Linux, this is what worked for me (you need to update the values below with your user, db name, etc.):
psql "host=YOUR_HOST port=YOUR_PORT dbname=YOUR_DB_NAME user=YOUR_USER_NAME password=YOUR_PASSWORD" -f "fully_qualified_path_to_your_script.sql"
You cannot put the query on a separate line; the batch interpreter will assume it's another command instead of a query for psql. I believe you will need to quote it as well.
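For instance, a quoted query can be passed directly with -c (a sketch using the same placeholder variables as in the question):
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME% -c "select * from test;"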
I agree with Spidey:
1] if you are passing the file with the sql use -f or --file parameter
When you want to execute several commands, the best way to do that is to add the -f parameter, and after that just type the path to your file without any " or ' marks (relative paths work also):
psql -h %host% -p 5432 -U %user% -d %dbname% -f ..\..\folder\Data.txt
It also works in .NET Core; I needed it to add basic data to my database after migrations.
Kindly refer to the documentation:
1] if you are passing a file with the SQL, use the -f or --file parameter
2] if you are passing an individual command, use the -c or --command parameter
If you are trying it in a shell script:
psql postgresql://$username:$password@$host/$database < /app/sql_script/script.sql

Slick SourceCodeGenerator From SQL File

Is there a way to use the Slick SourceCodeGenerator to generate source code from a file of SQL CREATE statements? I know there is a way to connect to a DB and read in the schema, but I want to cut out that step and just give it the file. Please advise.
Slick reads metadata via JDBC. If you find a JDBC driver that can do that from a SQL file, you may be in luck. Otherwise, why not use an H2 in-memory database? It has compatibility modes for various SQL dialects, though they are limited. Another option would be using something like https://github.com/bgranvea/mysql2h2-converter first to produce an H2-compatible schema file.
We used the following script to load a sql schema from a mysql database, convert it to H2 compatible format and then use it in-memory for tests. You should be able to adapt it.
#!/bin/sh
echo ''
export IP=192.168.1.123
export user=foobar
export password=secret
export database=foobar
ping -c 1 $IP &&\
echo "" &&\
echo "Server is reachable"
# dump mysql schema for debuggability (ignore in git)
# convert the mysql to h2db using the converter.
## disable foreign key checks at the beginning and enable them at the end. Prevents foreign key errors
echo "SET FOREIGN_KEY_CHECKS=0;" > foobar-mysql.sql
## Dump the Db structure and remove the auto_increment so as to set the id column back to 1
mysqldump --compact -u $user -h $IP -d $database -p$password\
|sed 's/CONSTRAINT `_*/CONSTRAINT `/g' \
|sed 's/KEY `_*/KEY `/g' \
|sed 's/ AUTO_INCREMENT=[0-9]*//' \
>> foobar-mysql.sql
echo "SET FOREIGN_KEY_CHECKS=1;" >> foobar-mysql.sql &&\
java -jar mysql2h2-converter.jar foobar-mysql.sql \
|perl -0777 -pe 's/([^`]),/\1,\n /g' \
|perl -0777 -pe 's/\)\);/)\n);/g' \
|perl -0777 -pe 's/(CREATE TABLE [^\(]*\()/\1\n /g' \
|sed 's/UNSIGNED/unsigned/g' \
|sed 's/float/real/' \
|sed "s/\(int([0-9]*).*\) DEFAULT '\(.*\)'/\1 DEFAULT \2/" \
|sed "s/tinyint(1)/boolean/" \
> foobar-h2.sql
perl -ne 'print "$ARGV\n" if /.\z/' -- foobar-h2.sql
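From there, one hedged way to feed the converted file to the Slick code generator (the jar path, output directory, package, and credentials below are placeholders, not part of the original script) is to load foobar-h2.sql into a file-based H2 database with H2's RunScript tool and point the generator at that database:
# load the converted schema into a file-based H2 database (MySQL compatibility mode)
java -cp h2.jar org.h2.tools.RunScript -url "jdbc:h2:./schema;MODE=MySQL" -user sa -script foobar-h2.sql
# then run the Slick generator against it, e.g. from sbt:
#   runMain slick.codegen.SourceCodeGenerator slick.jdbc.H2Profile org.h2.Driver "jdbc:h2:./schema;MODE=MySQL" src/main/scala com.example.schema sa ""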

Import dump/sql file into my postgresql database on Linode

I recently moved my Ruby on Rails 4 app from Heroku to Linode. Everything has been set up correctly, but I need to populate my database with a file; let's call it movies.sql.
I am not very familiar with postgresql commands or working on a VPS, so I'm having trouble getting this done. I uploaded the file to Dropbox, since I saw many SO posts saying you can use S3/Dropbox.
I saw different commands like this (unsure how to go about it in my situation):
psql -U postgres -d testdb -f /home/you/file.sql
psql -f file.sql dbname
psql -U username -d myDataBase -a -f myInsertFile
So which is the correct one in my situation, and how do I run it when I SSH into the Linode? Thanks.
You'll need to get the file onto your server or you'll need to use a different command from your terminal.
If you have the file locally, you can restore without SSHing in, using the psql command:
psql -h <ip_address_of_server> -U <database_username> -d <name_of_the_database> -f local/path/to/your/file.sql
Otherwise, the command is:
psql -U <database_username> -d <name_of_the_database> < remote/path/to/your/file.sql
-U sets the db username, -h sets the host, -d sets the name of the database, and -f tells the command you're restoring from a file.
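Since the file is on Dropbox, another option (a rough sketch; the share link and file name are placeholders) is to download it onto the Linode box first and then restore locally:
# on the Linode server, after SSHing in
wget -O movies.sql 'https://www.dropbox.com/s/XXXXXXXX/movies.sql?dl=1'
psql -U <database_username> -d <name_of_the_database> -f movies.sql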

Copy all keys from one db to another in redis

Instead of moving, I want to copy all my keys from a particular db to another.
Is this possible in redis? If yes, then how?
If you can't use MIGRATE COPY because of your redis version (2.6), you might want to copy each key separately, which takes longer but doesn't require you to log in to the machines themselves, and allows you to move data from one database to another.
Here's how I copy all keys from one database to another (but without preserving TTLs):
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=0
target_host=localhost
target_port=6379
target_db=1
#copy all keys without preserving ttl!
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
echo "Copying $key"
redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
| head -c -1 \
| redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0
done
Keys are not going to be overwritten; in order to do that, delete those keys before copying or simply flush the whole target database before starting.
The following copies all keys from database number 0 to database number 1 on localhost:
redis-cli --scan | xargs redis-cli migrate localhost 6379 '' 1 0 copy keys
If you use the same server/port you will get a timeout error but the keys seem to copy successfully anyway. GitHub Redis issue #1903
redis-cli -a $source_password -p $source_port -h $source_ip -n $dbname keys \* | while read key;
do echo "Copying $key";
redis-cli --raw -a $source_password -h $source_ip -p $source_port -n $dbname DUMP "$key" | head -c -1 | redis-cli -x -a $destination_password -h $destination_IP -p $destination_port RESTORE "$key" 0;
done
Latest solution:
Use the RIOT open-source command line tool provided by Redislabs to copy the data.
Reference: https://developer.redis.com/riot/riot-redis/cookbook.html#_performing_migration
GitHub project link: https://github.com/redis-developer/riot
How to install: https://developer.redis.com/riot/riot-redis/
# Source Redis db
SH=test1-redis.com
SP=6379
# Target Redis db
TH=test1-redis.com
TP=6379
# Copy from db0 to db1 (standalone Redis db, or cluster mode disabled)
#
riot-redis -h $SH -p $SP --db 0 replicate -h $TH -p $TP --db 1 --batch 10000 \
--scan-count 10000 \
--threads 4 \
--reader-threads 4 \
--reader-batch 500 \
--reader-queue 2000 \
--reader-pool 4
RIOT is quicker, supports multithreading, and works well for cross-environment Redis data copies (AWS ElastiCache, Redis OSS, and Redislabs).
Not directly. I would suggest using the always convenient redis-rdb-tools package (from Sripathi Krishnan) to extract the data from a normal rdb dump and reinject it into another instance.
See https://github.com/sripathikrishnan/redis-rdb-tools
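A rough sketch of that flow (assuming redis-rdb-tools is installed and you have a copy of the source instance's dump.rdb; the target host and port are placeholders):
# convert the RDB dump into the Redis protocol and pipe it into the target instance
rdb --command protocol /path/to/dump.rdb | redis-cli -h target_host -p 6379 --pipe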
As far as I understand, you need to copy keys from a particular DB (e.g. 5) to another particular DB, say 10. If that is the case, you can use the redis database dumper (https://github.com/r043v/rdd). Although, as per the documentation, it has a switch (-d) to select a database for the operation, it didn't work for me, so this is what I did:
1.) Edit the rdd.c file and look for the int main(int argc, char **argv) function
2.) Change the DB as per your requirement
3.) Compile the source with make
4.) Dump all keys using ./rdd -o "save.rdd"
5.) Edit the rdd.c file again and change the DB
6.) Make again
7.) Import by using ./rdd "save.rdd" -o insert -s "IP" -p"Port"
I know this is old, but for those of you coming here from Google:
I just published a command line interface utility to npm and github that allows you to copy keys that match a given pattern (even *) from one Redis database to another.
You can find the utility here:
https://www.npmjs.com/package/redis-utils-cli
Try using DUMP to first dump all the keys, and then RESTORE them on the target.
If you are migrating keys inside the same redis instance, you might use the internal MOVE command for that (with pipelining for more speed):
#!/bin/bash
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=4
target_db=0
total=$(redis-cli -h $source_host -p $source_port -n $source_db keys \* | sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | wc -c)
#move all keys to the target db
time redis-cli -h $source_host -p $source_port -n $source_db keys \* | \
sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | \
pv -s $total | \
redis-cli -h $source_host -p $source_port -n $source_db >/dev/null