Backup MySQL database

I have a MySQL database of about 1.7 GB. I usually back it up using mysqldump, which takes about 2 minutes. However, I would like to know the answers to the following questions:
1) Does mysqldump block read and/or write operations to the database? In a live scenario, I would not want to block users from using the database while it is being backed up.
2) It would be ideal for me to back up the WHOLE database only once in, say, a week, and in the intermediate days back up only one table, as the others won't change. Is there a way to achieve this?
3) Is mysqlhotcopy a better alternative for these purposes?

mysqlhotcopy does not work in certain cases (for example, when the read lock is lost) and does not work with InnoDB tables.
mysqldump is more widely used because it can back up all kinds of tables.
From the MySQL documentation:
mysqlhotcopy is a Perl script that was originally written and contributed by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to make a database backup quickly. It is the fastest way to make a backup of the database or single tables, but it can be run only on the same machine where the database directories are located. mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables. It runs on Unix and NetWare
The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.
Bye.

1) mysqldump blocks only as much as you ask it to via its locking options (--lock-tables, --lock-all-tables, --single-transaction). But if you want your backup to be consistent, mysqldump should either lock (--lock-all-tables) or use --single-transaction, or you might get an inconsistent database snapshot. Note: --single-transaction gives a consistent snapshot without blocking writers, but it works only for InnoDB.
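For example, a minimal sketch of both variants (the database name and output file are placeholders, and credentials are assumed to come from ~/.my.cnf):
# Consistent dump of an all-InnoDB database, without blocking writers
mysqldump --single-transaction mydb > mydb.sql
# Consistent dump when MyISAM tables are involved; this one blocks writes
mysqldump --lock-all-tables mydb > mydb.sql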
2) sure, just enumerate the tables you want to be backed up after the database name:
mysqldump OPTIONS DATABASE TABLE1 TABLE2 ...
Alternatively you can exclude the tables you don't want:
mysqldump ... --ignore-table=TABLE1 --ignore-table=TABLE2 .. DATABASE
So you can do a whole database dump once a week and backup only the changing tables once a day.
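For example, a rough crontab sketch of that schedule (database, table, and backup paths are placeholders; credentials are assumed to come from ~/.my.cnf, and --single-transaction assumes InnoDB tables):
# Full dump every Sunday at 03:00
0 3 * * 0 mysqldump --single-transaction mydb | gzip > /backups/mydb-full-$(date +\%F).sql.gz
# Dump only the changing table on the other days
0 3 * * 1-6 mysqldump --single-transaction mydb changing_table | gzip > /backups/mydb-changing_table-$(date +\%F).sql.gz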
3) mysqlhotcopy only works on MyISAM tables, and in most applications you are better off with InnoDB. There are commercial tools (quite expensive) for hot backup of InnoDB tables. Lately there is also a new open-source one for this purpose: Xtrabackup.
Also, to automate the process you can use astrails-safe. It supports database backups with mysqldump and filesystem backups with tar, plus encryption, upload to S3, and many other goodies. There is no xtrabackup support yet, but it should be easy to add if that is what you need.

Adding a MySQL slave to your setup would allow you to take consistent backups without locking the production database.
Adding a slave also gives you a binary log of changes. A dump is a snapshot of the database at the time you took the dump; the binary log contains all statements that modified the data, along with a timestamp.
If you have a failure in the middle of the day and you're only taking backups once a day, you've lost half a day's worth of work. With binary logs and mysqldump, you could restore from the previous day and 'play' the logs forward to the point of failure.
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
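A minimal sketch of that point-in-time recovery, assuming binary logging is enabled (the dump file, binlog names, and the stop time are placeholders):
# Restore last night's dump
mysql -u root -p mydb < mydb-yesterday.sql
# Replay today's changes from the binary logs, stopping just before the failure
mysqlbinlog --stop-datetime="2012-05-14 13:00:00" mysql-bin.000042 mysql-bin.000043 | mysql -u root -p mydb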
If you're running MySQL on a Linux server with LVM disks or a Windows server with VSS, you should check out Zmanda.
It takes binary diffs of the data on disk, which is much faster to read and restore than a text dump of the database.

No, you can specify tables to be locked using --lock-tables but they aren't by default
If you don't specify any tables then the whole DB is backed up, or you can specify a list of tables:
mysqldump [options] db_name [tables]
I haven't used it, sorry. However, I run a number of MySQL DBs, some bigger and some smaller than 1.7 GB, and I use mysqldump for all my backups.

Maatkit dump might be useful.
http://www.maatkit.org/doc/mk-parallel-dump.html

For MySQL and PHP, try this.
It will also remove backup files older than n days.
<?php
$dbhost = 'localhost';
$dbuser = 'xxxxx';
$dbpass = 'xxxxx';
$dbname = 'database1';
$folder = 'backups/';  // Folder where the dump files are written
$filename = $dbname . date("Y-m-d-H-i-s") . ".sql";
$remove_days = 7;      // Number of days a dump file stays on the server

// Dump the database to a timestamped .sql file
$command = "mysqldump --host=$dbhost --user=$dbuser --password=$dbpass $dbname > $folder$filename";
system($command);

// Remove dump files older than $remove_days days
$files = glob($folder . "*.sql");
foreach ($files as $file) {
    if (is_file($file) && time() - filemtime($file) >= $remove_days * 24 * 60 * 60) {
        unlink($file);
        echo "$file removed\n";
    } else {
        echo "$file was last modified: " . date("F d Y H:i:s.", filemtime($file)) . "\n";
    }
}
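To automate it, a possible crontab entry (the script path is just a placeholder):
# Run the backup script every night at 02:00
0 2 * * * php /path/to/mysql_backup.php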

Related

How to extract stored functions and procedures from mysqldump file

So I upgraded to MariaDB 10.2 slightly haphazardly and lost my stored functions and procedures (no idea why). Luckily I do have weekly backups (mysqldump), but I don't want to rebuild the whole DB again.
There are some clever options out there, such as uploading the old DB backup into a new database on your cluster and then copying the functions across, but I thought the easiest thing was to extract just the functions and procedures from the mysqldump file. Here is my solution; hopefully you may find it useful, or improve upon it...
gawk '/Dumping routines for/,/Dump completed/{print}' backupfile.sql > foo1.sql
Then you can import back into the DB in the normal way...
mysql -u<user> -p<psw> DBNAME < foo1.sql

Incrementally importing data to a PostgreSQL database

Situation:
I have a PostgreSQL-database that is logging data from sensors in a field-deployed unit (let's call this the source database). The unit has a very limited hard-disk space, meaning that if left untouched, the data-logging will cause the disk where the database is residing to fill up within a week. I have a (very limited) network link to the database (so I want to compress the dump-file), and on the other side of said link I have another PostgreSQL database (let's call that the destination database) that has a lot of free space (let's just, for argument's sake, say that the source is very limited with regard to space, and the destination is unlimited with regard to space).
I need to take incremental backups of the source database, append the rows that have been added since last backup to the destination database, and then clean out the added rows from the source database.
Now the source database might or might not have been cleaned since a backup was last taken, so the destination database needs to be able to import only the new rows in an automated (scripted) process, but pg_restore fails miserably when trying to restore from a dump that contains the same primary key values as the destination database.
So the question is:
What is the best way to restore only the rows from a source that are not already in the destination database?
The only solution that I've come up with so far is to pg_dump the database and restore the dump to a new secondary database on the destination side with pg_restore, then use simple SQL to sort out which rows already exist in my main destination database. But it seems like there should be a better way...
(extra question: Am I completely wrong in using PostgreSQL in such an application? I'm open to suggestions for other data-collection alternatives...)
A good way to start would probably be to use the --inserts option to pg_dump. From the documentation (emphasis mine):
Dump data as INSERT commands (rather than COPY). This will make
restoration very slow; it is mainly useful for making dumps that can
be loaded into non-PostgreSQL databases. However, since this option
generates a separate command for each row, an error in reloading a row
causes only that row to be lost rather than the entire table contents.
Note that the restore might fail altogether if you have rearranged
column order. The --column-inserts option is safe against column order
changes, though even slower.
I don't have the means to test it right now with pg_restore, but this might be enough for your case.
You could also use the fact that, from version 9.5, PostgreSQL provides ON CONFLICT DO ... for INSERT. Use a simple scripting language to add these to the dump and you should be fine. I haven't found an option for pg_dump to add those automatically, unfortunately.
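For example, a minimal sketch of that rewriting step, assuming the dump was produced with pg_dump --inserts (so each row is a single-line INSERT statement with no embedded newlines); file and database names are placeholders:
# Append ON CONFLICT DO NOTHING to every single-line INSERT in the dump
sed 's/^\(INSERT INTO .*\);$/\1 ON CONFLICT DO NOTHING;/' source_dump.sql > dedup_dump.sql
# Load it into the destination database; rows with duplicate primary keys are silently skipped
psql -d destination_db -f dedup_dump.sql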
You might google "sporadically connected database synchronization" to see related solutions.
It's not a neatly solved problem as far as I know - there are some common work-arounds, but I am not aware of a database-centric out-of-the-box solution.
The most common way of dealing with this is to use a message bus to move events between your machines. For instance, if your "source database" is just a data store, with no other logic, you might get rid of it, and use a message bus to say "event x has occurred", and point the endpoint of that message bus at your "destination machine", which then writes that to your database.
You might consider Apache ActiveMQ or read "Patterns of enterprise integration".
#!/bin/sh
PSQL=/opt/postgres-9.5/bin/psql
TARGET_HOST=localhost
TARGET_DB=mystuff
TARGET_SCHEMA_IMPORT=copied
TARGET_SCHEMA_FINAL=final
SOURCE_HOST=192.168.0.101
SOURCE_DB=slurpert
SOURCE_SCHEMA=public
########
create_local_stuff()
{
${PSQL} -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG0
CREATE SCHEMA IF NOT EXISTS ${TARGET_SCHEMA_IMPORT};
CREATE SCHEMA IF NOT EXISTS ${TARGET_SCHEMA_FINAL};
CREATE TABLE IF NOT EXISTS ${TARGET_SCHEMA_FINAL}.topic
( topic_id INTEGER NOT NULL PRIMARY KEY
, topic_date TIMESTAMP WITH TIME ZONE
, topic_body text
);
CREATE TABLE IF NOT EXISTS ${TARGET_SCHEMA_IMPORT}.tmp_topic
( topic_id INTEGER NOT NULL PRIMARY KEY
, topic_date TIMESTAMP WITH TIME ZONE
, topic_body text
);
OMG0
}
########
find_highest()
{
${PSQL} -q -t -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG1
SELECT MAX(topic_id) FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic;
OMG1
}
########
fetch_new_data()
{
watermark=${1-0}
echo ${watermark}
${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG2
\COPY (SELECT topic_id, topic_date, topic_body FROM ${SOURCE_SCHEMA}.topic WHERE topic_id >${watermark}) TO '/tmp/topic.dat';
OMG2
}
########
insert_new_data()
{
${PSQL} -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG3
DELETE FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic WHERE 1=1;
COPY ${TARGET_SCHEMA_IMPORT}.tmp_topic(topic_id, topic_date, topic_body) FROM '/tmp/topic.dat';
INSERT INTO ${TARGET_SCHEMA_FINAL}.topic(topic_id, topic_date, topic_body)
SELECT topic_id, topic_date, topic_body
FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic src
WHERE NOT EXISTS (
SELECT *
FROM ${TARGET_SCHEMA_FINAL}.topic nx
WHERE nx.topic_id = src.topic_id
);
OMG3
}
########
delete_below_watermark()
{
watermark=${1-0}
echo ${watermark}
${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG4
-- delete not yet activated; COUNT(*) instead
-- DELETE
SELECT COUNT(*)
FROM ${SOURCE_SCHEMA}.topic WHERE topic_id <= ${watermark}
;
OMG4
}
######## Main
#create_local_stuff
watermark="`find_highest`"
echo 'Highest:' ${watermark}
fetch_new_data ${watermark}
insert_new_data
echo 'Delete below:' ${watermark}
delete_below_watermark ${watermark}
# Eof
This is just an example. Some notes:
I assume a non-decreasing serial PK for the table; in most cases it could also be a timestamp
for simplicity, all the queries are run as user postgres, you might need to change this
the watermark method will guarantee that only new records will be transmitted, minimising bandwidth usage
the method is atomic: if the script crashes, nothing is lost
only one table is fetched here, but you could add more
because I'm paranoid, I use a different name for the staging table and put it into a separate schema
the whole script does two queries on the remote machine (one for the fetch, one for the delete); you could combine these (see the sketch after these notes)
but there is only one script (executing from the local = target machine) involved.
The DELETE is not yet active; it only does a count(*)
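A rough sketch of combining the two remote round-trips into a single call, reusing the script's variable names and heredoc style (note that, unlike in the script above, the DELETE here is actually enabled); treat it as untested:
fetch_and_trim()
{
watermark=${1-0}
${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG5
\COPY (SELECT topic_id, topic_date, topic_body FROM ${SOURCE_SCHEMA}.topic WHERE topic_id > ${watermark}) TO '/tmp/topic.dat';
DELETE FROM ${SOURCE_SCHEMA}.topic WHERE topic_id <= ${watermark};
OMG5
}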

Importing MySQL tables from other database in live site with mysqldump can cause trouble?

Scenario: I want to replicate MySQL tables from one database to another database.
Possible best solution: maybe to use the MySQL Replication feature.
Current solution I'm working on as a workaround (mysqldump), because I can't spend time learning about Replication under the current deadline.
So currently I'm using a command like this:
mysqldump -u user1 -ppassword1 --single-transaction SourceDb TblName | mysql -u user2 -ppassword2 DestinationDB
Based on some tests, it seems to be working fine.
While running the above command, I ran the ab command with 1000 requests against the destination site and also tried accessing the site from a browser.
My concern is for the destination live site, into which we are importing the whole table (which will internally drop the existing table and create a new one with the new data).
Can I be sure that the live site won't break during this process, or is there any risk factor?
If yes, can that be resolved?
As you already admitted, replication is the best solution here; I'd agree with that.
You said you have 1000 requests on the "Destination" side? Are these 1000 connections to the Destination read-only?
Of course, dropping and recreating the table isn't the right choice here for active connections.
I can suggest one improvement: instead of loading directly into the table, load into a different database and swap the tables. This should be quicker as far as connections to the Destination database/tables are concerned.
Create the new table in a different database:
mysqldump -u user1 -ppassword1 --single-transaction -hSOURCE_HOST SourceDb TblName | mysql -uuser2 -ppassword2 -hDESTINATION_HOST DB_New
(Are you sure you don't need "-h" here?)
Swap the tables:
rename table DestinationDB.TblName to DestinationDB.old_TblName, DB_New.TblName to DestinationDB.TblName;
If you're on the same host (which I don't think you are), you might want to use pt-online-schema-change and swap tables!

will mysql dump break replication?

I have 2 databases, X "production" and Y "testing".
Database X should be identical to Y in structure. However, they are not, because I made many changes to production.
Now, I need to somehow export X and import it into Y without breaking any replication.
I am thinking of doing a mysqldump, but I don't want to cause any issues with replication, which is why I am asking this question to confirm.
Here are the steps that I want to follow
back up production. (ie. mysqldump -u root -p --triggers --routines X > c:/Y.sql)
Restore it. (ie. mysql -u root -p Y < c:\Y.sql)
Will this cause any issues to the replication?
I believe the dump will execute everything and save it into its binary log, and the slave will be able to see it and replicate it with no problems.
Is what I am trying to do correct? Will it cause any replication issues?
thanks
Yes, backing up from X and restoring to Y is a normal operation. We often call this "reinitializing the replica."
This does interrupt replication. There's no reliable way to restore the data at the same time as letting the replica continue to apply changes, because the changes the replica is processing are not in sync with the snapshot of data represented by the backup. You could overwrite changed data, or miss changes, and this would make the replica totally out of sync.
So you have to stop replication on the replica while you restore.
Here are the steps for a typical replica reinitialization (a rough command sketch follows the list):
mysqldump from the master, with the --master-data option so the dump includes the binary log position that was current at the moment of the dump.
Stop replication on the replica.
Restore the dump on the replica.
Use CHANGE MASTER to alter what binary log coordinates the replica starts at. Use the coordinates that were saved in the dump.
Start replication on the replica.
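A rough command sketch of those steps (host names, credentials, and the binlog coordinates are placeholders; the real coordinates are the ones --master-data writes into the dump, and --single-transaction assumes InnoDB tables):
# 1. Dump from the master, recording the binlog position in the dump
mysqldump -h master_host -u root -p --master-data=2 --single-transaction mydb > mydb.sql
# 2-3. On the replica: stop replication, then restore the dump
mysql -h replica_host -u root -p -e "STOP SLAVE;"
mysql -h replica_host -u root -p mydb < mydb.sql
# 4. Point the replica at the coordinates saved in the dump (see the commented CHANGE MASTER line near the top of mydb.sql)
mysql -h replica_host -u root -p -e "CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456;"
# 5. Start replication again
mysql -h replica_host -u root -p -e "START SLAVE;"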
Re your comment:
Okay, I understand what you need better now.
Yes, there's an option for mysqldump --no-data so the output only includes the CREATE TABLE and other DDL, but no INSERT statements with data. Then you can import that to a separate database on the same server. And you're right, by default DDL statements are added to the binary log, so any replication replicas will automatically run the same statements.
You can even do the export & import in just two steps like this:
$ mysqladmin create newdatabase
$ mysqldump --no-data olddatabase | mysql newdatabase

SQL, moving million records from a database to other database

I am a C# developer; I am not really good with SQL. I have a simple question here. I need to move more than 50 million records from one database to another database. I tried to use the import function in MS SQL Server, but it got stuck because the log was full (I got the error message "The transaction log for database 'mydatabase' is full due to 'LOG_BACKUP'"). The database recovery model was set to simple. My friend said that importing millions of records using Tasks -> Import Data will cause the log to become massive and told me to use a loop instead to transfer the data. Does anyone know how and why? Thanks in advance.
If you are moving the entire database, use backup and restore; it will be the quickest and easiest.
http://technet.microsoft.com/en-us/library/ms187048.aspx
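For example, a minimal sketch of backup and restore via sqlcmd (server names, paths, and logical file names are placeholders; check the logical names with RESTORE FILELISTONLY if they differ):
# Back up the source database to a file
sqlcmd -S SourceServer -Q "BACKUP DATABASE MyDb TO DISK = 'C:\backup\MyDb.bak'"
# Copy the .bak file to the destination server, then restore it there
sqlcmd -S DestServer -Q "RESTORE DATABASE MyDb FROM DISK = 'C:\backup\MyDb.bak' WITH MOVE 'MyDb' TO 'D:\data\MyDb.mdf', MOVE 'MyDb_log' TO 'D:\data\MyDb_log.ldf'"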
If you are just moving a single table, read about and use the bcp command-line tool for this many records:
The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.
http://technet.microsoft.com/en-us/library/ms162802.aspx
The fastest and probably most reliable way is to bulk copy the data out via SQL Server's bcp.exe utility. If the schema on the destination database is exactly identical to that on the source database, including nullability of columns, export it in "native format":
http://technet.microsoft.com/en-us/library/ms191232.aspx
http://technet.microsoft.com/en-us/library/ms189941.aspx
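A rough sketch of that native-format bcp round trip (server, database, and table names are placeholders; -T uses Windows authentication):
# Export the table in native format from the source server
bcp SourceDb.dbo.BigTable out BigTable.dat -n -S SourceServer -T
# Copy BigTable.dat to the destination server (e.g. over FTP), then import it there,
# committing in batches of 10000 rows
bcp DestDb.dbo.BigTable in BigTable.dat -n -S DestServer -T -b 10000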
If the schema differs between source and target, you will encounter...interesting (yes, interesting is a good word for it) problems.
If the schemas differ or you need to perform any transforms on the data, consider using text format. Or another format (BCP lets you create and use a format file to specify the format of the data for export/import).
You might consider exporting data in chunks: if you encounter problems it gives you an easier time of restarting without losing all the work done so far.
You might also consider zipping the exported data files up to minimize time on the wire.
Then FTP the files over to the destination server.
bcp them in. You can use the bcp utility on the destination server or the BULK INSERT statement in SQL Server to do the work. It makes no real difference.
The nice thing about using BCP to load the data is that the load is what is described as a 'non-logged' transaction, though it's really more like a 'minimally logged' transaction.
If the tables on the destination server have IDENTITY columns, you'll need to use SET IDENTITY_INSERT to allow explicit values in the identity column on the table(s) involved for the nonce (don't forget to turn it back off). After your data is imported, you'll need to run DBCC CHECKIDENT to get things back in sync.
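With bcp specifically, a simpler variation (not necessarily what the author had in mind) is the -E switch, which keeps the identity values stored in the data file; table and server names are placeholders:
# Load while preserving the identity values from the data file
bcp DestDb.dbo.BigTable in BigTable.dat -n -E -S DestServer -T -b 10000
# Afterwards, check and, if needed, correct the table's current identity value
sqlcmd -S DestServer -d DestDb -Q "DBCC CHECKIDENT ('dbo.BigTable')"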
And depending on what you're doing, it can sometimes be helpful to put the database in single-user mode or dbo-only mode for the duration of the surgery: http://msdn.microsoft.com/en-us/library/bb522682.aspx
Another approach I've used to great effect is to use Perl's DBI/DBD modules (which provide access to the bulk copy interface) and write a Perl script to pull the data out of the source server, transform it, and bulk load it directly into the destination server, without having to save it to disk and move it. It also means you can trap errors and design things for recovery and restart right at the point of failure.
Use BCP to migrate data.
Another approach I have used in the past is to take a backup of the transaction log and shrink the log prior to the migration. Split the migration script into parts and run the log backup / shrink / migrate iteration a few times.