How to back up and restore a MySQL database in Rails 3

Is it possible to do the following:
1. Hot backup of a MySQL database from a Rails 3 application
2. Incremental backup of a MySQL database from a Rails 3 application
3. Cold backup of a MySQL database from a Rails 3 application
4. Restore any of the above backups (hot, incremental and cold) through the Rails 3 application
If so, please let me know how to achieve it.
Thanks,
Sudhir C.N.

Set up some cron jobs. I like to use Whenever for writing them. I run this bash script once per day:
#!/bin/bash
BACKUP_FILENAME="APPNAME_production_`date +%s`.gz"
mysqldump -ce -h MYSQL.HOST.COM -u USERNAME -pPASSWORD APPNAME_production | gzip | uuencode $BACKUP_FILENAME | mail -s "daily backup for `date`" webmaster@yourdomain.com
echo -e "\n====\n== Backed up APPNAME_production to $BACKUP_FILENAME on `date` \n====\n"
The output goes to cron.log. This may require some tweaking on your end, but it works great once you have it set up. It e-mails the backup to you once per day as a gzipped file; my database is fairly large and the file is still under 2000 KB right now.
It's not the safest technique, so if you're really concerned that someone might get into your e-mail and gain access to the backups (which should have sensitive information encrypted anyway), then you'll have to find another solution.
To restore:
gzip -d APPNAME_production_timestamp.gz
mysql -u USERNAME -pPASSWORD APPNAME_production < APPNAME_production_timestamp.sql
or something similar... I don't need to restore often, so I don't know this one off the top of my head, but a quick Google search should turn up something if this doesn't work.
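If you schedule the script with plain cron rather than Whenever, a crontab entry along these lines would do it (the script and log paths here are just placeholders):
# run the backup script every night at 02:30 and append its output to cron.log
30 2 * * * /bin/bash /home/deploy/scripts/mysql_backup.sh >> /home/deploy/log/cron.log 2>&1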

Related

Proper way to migrate a postgres database?

I have a dev version and a production version running in Django.
I recently started populating it with a lot of data and found that Django's loaddata tries to load everything into memory before adding it to the db, and my files will be too big for that.
What is the proper way to push my data from my dev machine to production?
I did...
pg_dump -U user -W db > ./filename.sql
and then on the production server I did...
psql dbname < filename.sql
It seems like it worked and all the data is there, but it came up with some errors such as
relation xxx already exists
constraint xxx for relation xxx already exists
and there were quite a few of them, but like I said, everything appears to be there. Is this the right way to do it?
Edit: The database on the production machine already contains data, and I don't want to truncate the tables before the import.
This is the script that I use:
pg_dump -d DATABASE_NAME -U postgres --format plain --inserts > /FILE.sql
Edit: Since you say in the comments that you don't want to truncate the tables before the import, you can't do this type of import into your production database. I suggest emptying your production database before importing the dev database dump.
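If dropping and recreating the dumped objects on the production side is acceptable, another option (just a sketch, not part of the original answer) is to have pg_dump emit DROP ... IF EXISTS statements so the "already exists" errors go away (--if-exists requires PostgreSQL 9.4 or later):
# dump with DROP ... IF EXISTS so existing tables are replaced on restore (destructive on the target)
pg_dump -U postgres --format plain --inserts --clean --if-exists DATABASE_NAME > /FILE.sql
# restore on the production server
psql -U postgres DATABASE_NAME < /FILE.sql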

magento migration from one domain to another

I am transferring my Magento site from an old domain to a new domain.
I have exported the database file from the old server and I have done all the necessary changes.
Now I'm trying to import the exported file into the new database, but the SQL import has been stuck loading for almost an hour.
Please, somebody help.
See the attached loading screen.
Thank you.
I would suggest making a backup of the whole cPanel account and then reimporting it. That way you won't mess anything up in the database. If you still need to export and reimport the database manually, make sure you disable foreign key checks by adding these statements before and after the contents of your dump:
SET foreign_key_checks = 0;
SET foreign_key_checks = 1;
And to successfully import a large database you may also need to raise limits such as max_allowed_packet in MySQL's configuration file (my.cnf / my.ini).
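For example, one way to raise that limit temporarily for the running server (a sketch; the 256 MB value is arbitrary and resets on restart):
# raise the maximum packet size to 256 MB for the current server instance
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 268435456;"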
I wouldn't do it through a GUI. Do you have SSH access? If so, here's how you can run it from the command line, which won't be limited by browser processing.
dump:
mysqldump -u '<<insert user>>' -p --single-transaction --databases <<database name>> > data_dump.sql
load:
mysql -p -u '<<insert user>>' <<database name>> < data_dump.sql
It's best to do this as the root user so you don't have any trouble.
On import, if you are getting errors that the definer is not a user, you can either create the definer user or run this sed command, which replaces the definer in your file with the current MySQL user:
sed -E 's/DEFINER=`[^`]+`@`[^`]+`/DEFINER=<<new user name>>/g' data_dump.sql > cleaned_data_dump.sql
As espradley and damek132 said, combine both answers: disable foreign key checks if they aren't already disabled, although mostly that statement is already included when exporting the SQL dump.
And use the mysql command line over SSH. You should be up and running in half an hour.
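Putting the two suggestions together, the import over SSH could look something like this sketch (user, database and file names are placeholders):
# wrap the import so foreign key checks are off while the dump loads
{ echo "SET foreign_key_checks = 0;"; cat data_dump.sql; echo "SET foreign_key_checks = 1;"; } | mysql -p -u '<<insert user>>' <<database name>>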

How to restore SQL Server database

I am playing with MySQL, but I read this post first:
https://blog.stackoverflow.com/2011/04/creative-commons-data-dump-apr-11/
I want to play with this data in SQL Server.
When I downloaded it, I found many rar files. When I extracted one of them, I found an XML file, but I really do not know how to restore it.
Can anyone explain what I need to do to restore it?
You can do this from the shell/command line:
$ mysql -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
http://www.webcheatsheet.com/SQL/mysql_backup_restore.php
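For reference, the matching backup command has the same shape (same placeholders):
$ mysqldump -u [uname] -p[pass] [db_to_backup] > backupfile.sql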

Iterate over multiple MySQL tables, export 1 table from each

I have around 150 MySQL databases, and I need to export one table from each of them.
Is this possible? The username and password are identical for each DB.
I'm sure there's a more compact way to do it but this should work.
#!/bin/bash
mysql -B -e "show databases" | egrep -v "Database|information_schema" | while read db;
do
echo "$db";
mysqldump "$db" TableName > "$db.sql"
done
You may need to tweak the mysql and mysqldump calls depending on your connection information.
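For instance, if the credentials have to be passed explicitly, a variant of the same loop might look like this (the host, user and table names are assumptions):
#!/bin/bash
# export one table from every database, passing credentials explicitly
MYSQL_ARGS="-h localhost -u backupuser -pPASSWORD"
mysql $MYSQL_ARGS -N -B -e "show databases" | egrep -v "information_schema|performance_schema" | while read db;
do
echo "dumping TableName from $db";
mysqldump $MYSQL_ARGS "$db" TableName > "$db.sql"
done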
I think in this case, iteration would be more appropriate (rather than recursion).
If you are on Linux, I'd suggest writing a simple bash script that cycles through the 150 DB names and calls mysqldump on each one.
See link text; it generates metadata for all databases and all tables, and you may be able to adapt it to export data for you. However, this is in PHP and I am not certain which language you wish to use.

Backup MySQL database

I have a MySQL Database of about 1.7GB. I usually back it up using mysqldump and this takes about 2 minutes. However, I would like to know the answers to the following questions:
Does mysqldump block read and/or write operations to the database? Because in a live scenario, I would not want to block users from using the database while it is being backed up.
It would be ideal for me to back up the WHOLE database only once a week, say, and on the intermediate days back up only the one table that changes, since the others won't change. Is there a way to achieve this?
Is mysqlhotcopy a better alternative for these purposes?
mysqlhotcopy does not work in certain cases where the read lock is lost,
and it does not work with InnoDB tables.
mysqldump is more widely used because it can back up all kinds of tables.
From the MySQL documentation:
mysqlhotcopy is a Perl script that was originally written and contributed by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to make a database backup quickly. It is the fastest way to make a backup of the database or single tables, but it can be run only on the same machine where the database directories are located. mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables. It runs on Unix and NetWare
The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.
Bye.
1) mysqldump only blocks when you ask it to (with one of --lock-tables, --lock-all-tables, --single-transaction). But if you want your backup to be consistent, then mysqldump should block (using --single-transaction or --lock-all-tables), or you might get an inconsistent database snapshot. Note: --single-transaction works only for InnoDB.
2) Sure, just enumerate the tables you want backed up after the database name:
mysqldump OPTIONS DATABASE TABLE1 TABLE2 ...
Alternatively, you can exclude the tables you don't want (--ignore-table takes the database.table form):
mysqldump ... --ignore-table=DATABASE.TABLE1 --ignore-table=DATABASE.TABLE2 .. DATABASE
So you can do a whole database dump once a week and back up only the changing tables once a day (see the sketch after this answer).
3) mysqlhotcopy only works on MyISAM tables, and in most applications you are better off with InnoDB. There are commercial tools (quite expensive) for hot backups of InnoDB tables. Lately there is also a new open-source one for this purpose: Xtrabackup.
Also, to automate the process you can use astrails-safe. It supports database backups with mysqldump and filesystem backups with tar, plus encryption, upload to S3, and many other goodies. There is no xtrabackup support yet, but it should be easy to add if that is what you need.
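As a rough sketch of the weekly/daily split from point 2 (the database, table and file names are made up):
# weekly: consistent full dump of an InnoDB database
mysqldump --single-transaction DATABASE > full_dump_`date +%F`.sql
# daily: dump only the table that actually changes
mysqldump --single-transaction DATABASE CHANGING_TABLE > changing_table_`date +%F`.sql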
Adding a MySQL slave to your setup would allow you to take consistent backups without locking the production database.
Adding a slave also gives you a binary log of changes. A dump is a snapshot of the database at the time you took it. The binary log contains all statements that modified the data, along with a timestamp.
If you have a failure in the middle of the day and you're only taking backups once a day, you've lost half a day's worth of work. With binary logs and mysqldump, you can restore from the previous day and 'play' the logs forward to the point of failure.
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
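A minimal sketch of restoring the previous dump and then playing the binary log forward (the log path and stop time are placeholders):
# restore last night's full dump first
mysql -u USERNAME -p DATABASE < full_dump.sql
# then replay binary-log changes up to just before the failure
mysqlbinlog --stop-datetime="YYYY-MM-DD HH:MM:SS" /var/log/mysql/mysql-bin.000001 | mysql -u USERNAME -p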
If you're running MySQL on a Linux server with LVM disks or a Windows server with VSS, you should check out Zmanda.
It takes binary diffs of the data on disk, which is much faster to read and restore than a text dump of the database.
By default mysqldump does lock the tables it dumps (--lock-tables is part of the default --opt group); you can turn that off with --skip-lock-tables, or use --single-transaction for a non-blocking, consistent InnoDB dump.
If you don't specify any tables then the whole DB is backed up, or you can specify a list of tables:
mysqldump [options] db_name [tables]
I've not used it, sorry; however, I run a number of MySQL DBs, some bigger and some smaller than 1.7 GB, and I use mysqldump for all my backups.
Maatkit dump might be useful.
http://www.maatkit.org/doc/mk-parallel-dump.html
For MySQL and PHP, try this.
It will also remove backup files older than n days.
$dbhost = 'localhost';
$dbuser = 'xxxxx';
$dbpass = 'xxxxx';
$dbname = 'database1';
$folder = 'backups/'; // Name of folder you want to place the file
$filename = $dbname . date("Y-m-d-H-i-s") . ".sql";
$remove_days = 7; // Number of days that the file will stay on the server
$command="mysqldump --host=$dbhost --user=$dbuser --password=$dbpass $dbname > $folder$filename";
system($command);
$files = glob($folder . "*.sql");
foreach($files as $file) {
if(is_file($file)
&& time() - filemtime($file) >= $remove_days*24*60*60) { // older than $remove_days days
unlink($file);
echo "$file removed \n";
} else { echo "$file was last modified: " . date ("F d Y H:i:s.", filemtime($file)) . "\n"; }
}
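To run the PHP script automatically, a cron entry along these lines would work (the script path is just an example):
# run the backup script every night at 03:00
0 3 * * * php /path/to/backup_script.php >> /var/log/db_backup.log 2>&1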