Can importing MySQL tables from another database into a live site with mysqldump cause trouble? - replication

Scenario: I want to replicate MySQL tables from one database to another database.
Possible best solution: probably to use the MySQL Replication feature.
Current solution I'm working on as a workaround (mysqldump), because I can't spend time learning about Replication under the current deadline.
So currently I'm using a command like this:
mysqldump -u user1 -ppassword1 --single-transaction SourceDb TblName | mysql -u user2 -ppassword2 DestinationDB
Based on some tests, it seems to be working fine.
While running the above command, I ran ab with 1000 requests against the destination site and also tried accessing the site from a browser.
My concern is for the destination live site, on which we are importing the whole table (which will internally drop the existing table and create a new one with the new data).
Can I be sure that the live site won't break during this process, or is there a risk factor?
If there is, can it be resolved?

As you already admitted, replication is the best solution here; I'd agree with that.
You said you have 1000 requests on the "Destination" side? Are these 1000 connections to the Destination read-only?
Of course, dropping and recreating the table isn't the right choice here while there are active connections.
I can suggest one improvement: instead of loading directly into the table, load into a different database and swap the tables. This should be quicker as far as connections to the Destination database/tables are concerned.
Create the new table in a different database:
mysqldump -u user1 -ppassword1 --single-transaction -hSOURCE_HOST SourceDb TblName | mysql -uuser2 -ppassword2 -hDESTINATION_HOST DB_New
(Are you sure you don't have "-h" here?)
Swap the tables:
rename table DestinationDB.TblName to DestinationDB.old_TblName, DB_New.TblName to DestinationDB.TblName;
If you're on the same host (which I don't think you are), you might want to use pt-online-schema-change and swap the tables!
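For completeness, a rough sketch tying the dump and the swap together in one script (the host names, credentials, and the final DROP of the old copy are assumptions, not part of the original setup):

#!/bin/sh
# Sketch only: load the fresh copy into a staging database, then swap atomically.
# 1. Make sure the staging database exists and no stale old copy is lying around.
mysql -uuser2 -ppassword2 -hDESTINATION_HOST -e "CREATE DATABASE IF NOT EXISTS DB_New"
mysql -uuser2 -ppassword2 -hDESTINATION_HOST -e "DROP TABLE IF EXISTS DestinationDB.old_TblName"
# 2. Dump the table from the source and load it into the staging database.
mysqldump -u user1 -ppassword1 --single-transaction -hSOURCE_HOST SourceDb TblName \
  | mysql -uuser2 -ppassword2 -hDESTINATION_HOST DB_New
# 3. Swap: RENAME TABLE is atomic, so readers see either the old table or the new one.
mysql -uuser2 -ppassword2 -hDESTINATION_HOST -e \
  "RENAME TABLE DestinationDB.TblName TO DestinationDB.old_TblName, DB_New.TblName TO DestinationDB.TblName"
# 4. Optionally drop the previous copy once you are happy with the new data.
# mysql -uuser2 -ppassword2 -hDESTINATION_HOST -e "DROP TABLE DestinationDB.old_TblName"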

Related

Incrementally importing data to a PostgreSQL database

Situation:
I have a PostgreSQL-database that is logging data from sensors in a field-deployed unit (let's call this the source database). The unit has a very limited hard-disk space, meaning that if left untouched, the data-logging will cause the disk where the database is residing to fill up within a week. I have a (very limited) network link to the database (so I want to compress the dump-file), and on the other side of said link I have another PostgreSQL database (let's call that the destination database) that has a lot of free space (let's just, for argument's sake, say that the source is very limited with regard to space, and the destination is unlimited with regard to space).
I need to take incremental backups of the source database, append the rows that have been added since last backup to the destination database, and then clean out the added rows from the source database.
Now the source database might or might not have been cleaned since a backup was last taken, so the destination database needs to be able to import only the new rows in an automated (scripted) process, but pg_restore fails miserably when trying to restore from a dump that has the same primary key numbers as the destination database.
So the question is:
What is the best way to restore only the rows from a source that are not already in the destination database?
The only solution that I've come up with so far is to pg_dump the database and restore the dump to a new secondary database on the destination side with pg_restore, then use simple SQL to sort out which rows already exist in my main destination database. But it seems like there should be a better way...
(extra question: Am I completely wrong in using PostgreSQL in such an application? I'm open to suggestions for other data-collection alternatives...)
A good way to start would probably be to use the --inserts option to pg_dump. From the documentation (emphasis mine):
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safe against column order changes, though even slower.
I don't have the means to test it right now with pg_restore, but this might be enough for your case.
You could also use the fact that from version 9.5, PostgreSQL provides ON CONFLICT DO ... for INSERTs. Use a simple scripting language to add these to the dump and you should be fine. Unfortunately, I haven't found an option for pg_dump to add them automatically.
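A minimal sketch of that post-processing, assuming PostgreSQL 9.5+ on the destination, placeholder names (source-host, public.topic, slurpert, mystuff), and that every INSERT fits on a single line (no literal newlines inside text values):

# Pull the new data over the limited link compressed, and append ON CONFLICT DO NOTHING
# to every INSERT so rows that already exist in the destination are silently skipped.
ssh source-host "pg_dump --data-only --column-inserts -t public.topic -U postgres slurpert | gzip" \
  | gunzip \
  | sed '/^INSERT INTO/ s/;$/ ON CONFLICT DO NOTHING;/' \
  | psql -U postgres mystuff

Note that this still transfers every row of the table, not just the new ones.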
You might google "sporadically connected database synchronization" to see related solutions.
It's not a neatly solved problem as far as I know - there are some common work-arounds, but I am not aware of a database-centric out-of-the-box solution.
The most common way of dealing with this is to use a message bus to move events between your machines. For instance, if your "source database" is just a data store, with no other logic, you might get rid of it, and use a message bus to say "event x has occurred", and point the endpoint of that message bus at your "destination machine", which then writes that to your database.
You might consider Apache ActiveMQ or read "Patterns of enterprise integration".
#!/bin/sh

PSQL=/opt/postgres-9.5/bin/psql

TARGET_HOST=localhost
TARGET_DB=mystuff
TARGET_SCHEMA_IMPORT=copied
TARGET_SCHEMA_FINAL=final

SOURCE_HOST=192.168.0.101
SOURCE_DB=slurpert
SOURCE_SCHEMA=public

########
create_local_stuff()
{
    ${PSQL} -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG0
    CREATE SCHEMA IF NOT EXISTS ${TARGET_SCHEMA_IMPORT};
    CREATE SCHEMA IF NOT EXISTS ${TARGET_SCHEMA_FINAL};
    CREATE TABLE IF NOT EXISTS ${TARGET_SCHEMA_FINAL}.topic
        ( topic_id INTEGER NOT NULL PRIMARY KEY
        , topic_date TIMESTAMP WITH TIME ZONE
        , topic_body text
        );
    CREATE TABLE IF NOT EXISTS ${TARGET_SCHEMA_IMPORT}.tmp_topic
        ( topic_id INTEGER NOT NULL PRIMARY KEY
        , topic_date TIMESTAMP WITH TIME ZONE
        , topic_body text
        );
OMG0
}

########
find_highest()
{
    ${PSQL} -q -t -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG1
    SELECT MAX(topic_id) FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic;
OMG1
}

########
fetch_new_data()
{
    watermark=${1-0}
    echo ${watermark}
    ${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG2
    \COPY (SELECT topic_id, topic_date, topic_body FROM ${SOURCE_SCHEMA}.topic WHERE topic_id >${watermark}) TO '/tmp/topic.dat';
OMG2
}

########
insert_new_data()
{
    ${PSQL} -h ${TARGET_HOST} -U postgres ${TARGET_DB} <<OMG3
    DELETE FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic WHERE 1=1;
    COPY ${TARGET_SCHEMA_IMPORT}.tmp_topic(topic_id, topic_date, topic_body) FROM '/tmp/topic.dat';
    INSERT INTO ${TARGET_SCHEMA_FINAL}.topic(topic_id, topic_date, topic_body)
    SELECT topic_id, topic_date, topic_body
    FROM ${TARGET_SCHEMA_IMPORT}.tmp_topic src
    WHERE NOT EXISTS (
        SELECT *
        FROM ${TARGET_SCHEMA_FINAL}.topic nx
        WHERE nx.topic_id = src.topic_id
        );
OMG3
}

########
delete_below_watermark()
{
    watermark=${1-0}
    echo ${watermark}
    ${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG4
    -- delete not yet activated; COUNT(*) instead
    -- DELETE
    SELECT COUNT(*)
    FROM ${SOURCE_SCHEMA}.topic WHERE topic_id <= ${watermark}
    ;
OMG4
}

######## Main
#create_local_stuff
watermark="`find_highest`"
echo 'Highest:' ${watermark}
fetch_new_data ${watermark}
insert_new_data
echo 'Delete below:' ${watermark}
delete_below_watermark ${watermark}
# Eof
This is just an example. Some notes:
I assume a non-decreasing serial PK for the table; in most cases it could also be a timestamp.
For simplicity, all the queries are run as user postgres; you might need to change this.
The watermark method guarantees that only new records are transmitted, minimising bandwidth usage.
The method is atomic: if the script crashes, nothing is lost.
Only one table is fetched here, but you could add more.
Because I'm paranoid, I use a different name for the staging table and put it into a separate schema.
The whole script does two queries on the remote machine (one for the fetch, one for the delete); you could combine these (see the sketch below),
but there is only one script involved (executed from the local = target machine).
The DELETE is not yet active; it only does a COUNT(*).
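A minimal sketch of combining those two remote queries into a single psql call, using the same variables as the script above (the DELETE stays commented out, exactly as in the original):

########
fetch_and_cleanup()
{
    watermark=${1-0}
    echo ${watermark}
    ${PSQL} -h ${SOURCE_HOST} -U postgres ${SOURCE_DB} <<OMG5
    \COPY (SELECT topic_id, topic_date, topic_body FROM ${SOURCE_SCHEMA}.topic WHERE topic_id >${watermark}) TO '/tmp/topic.dat';
    -- delete not yet activated; COUNT(*) instead
    -- DELETE
    SELECT COUNT(*) FROM ${SOURCE_SCHEMA}.topic WHERE topic_id <= ${watermark};
OMG5
}

In the main section, the fetch_new_data and delete_below_watermark calls would then be replaced by a single fetch_and_cleanup ${watermark} call.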

Proper way to migrate a postgres database?

I have a dev version and a production version running in django.
I recently started populating it with a lot of data and found that django loaddata tries to load everything into memory before adding it to the db, and my files will be too big for that.
What is the proper way to push my data from my dev machine to my production?
I did...
pg_dump -U user -W db > ./filename.sql
and then on the production server I did...
psql dbname < filename.sql
It seems like it worked, all the data is there, but it came up with some errors such as
relation xxx already exists
constraint xxx for relation xxx already exists
and there were quite a few of them, but like I said everything appears to be there. Is this the right way to do it?
Edit: On the production machine I already have a database with data in it, and I don't want to truncate the tables before the import.
This is the script that I use:
pg_dump -d DATABASE_NAME -U postgres --format plain --inserts > /FILE.sql
Edit: As you say in the comments that you don't want to truncate the tables before the import, you can't do this type of import into your production database. I suggest emptying your production database before importing the dev database dump.
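A rough sketch of that approach (the database and file names are placeholders; dropping the database throws away whatever is currently in production, which is the point of emptying it first):

# On the dev machine: plain-format dump with INSERT statements, as above.
pg_dump -d DATABASE_NAME -U postgres --format plain --inserts > /FILE.sql

# On the production machine: recreate the database empty, then load the dump.
dropdb -U postgres PROD_DB_NAME
createdb -U postgres PROD_DB_NAME
psql -U postgres PROD_DB_NAME < /FILE.sql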

mysql optimized query execution

As part of an ongoing research work, I am checking whether a URL exists using the cURL command. I have been executing a shell script for a couple of days and it does an update for each URL in my database. However, the script seems to update only around 100,000 rows a day.
I was thinking that if I wrote the values to a file first and then did the updates, the execution might be faster.
I am connecting to the database using the command line.
mysql -h servername -u username -ppassword databasename -e "Update Query"
For example, instead of connecting to the database 2 million times like above from the command line and updating 2 million rows, I am planning to connect to the database only once from the command line and update 2 million rows from the file.
So is the second approach better than the first one or the time difference would be negligible?
Three approaches:
You could use LOAD DATA INFILE.
You could build up a .sql file with all of the updates you need (see the sketch below).
You could use something other than a CLI to connect to the URLs and DB. In other words, not using the "curl" and "mysql" commands, but using a real programming language and its libraries for checking URLs and updating databases.
Any of those would probably be faster, though you'll likely get more of a speed improvement by making the HTTP calls in parallel. You can do that more easily with a real programming language.
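As a sketch of the second approach (the table and column names url_checks, url and is_alive are made up for illustration): write one UPDATE per URL into a file while checking, then load the whole file over a single connection.

#!/bin/sh
# Sketch: collect all UPDATEs in one file, then run them in a single mysql session.
: > updates.sql
while read -r url; do
    if curl --silent --head --fail "$url" > /dev/null; then
        status=1
    else
        status=0
    fi
    # Assumes the URLs contain no single quotes; escape them properly in real use.
    echo "UPDATE url_checks SET is_alive = $status WHERE url = '$url';" >> updates.sql
done < urls.txt

# One connection for 2 million statements instead of 2 million connections.
mysql -h servername -u username -ppassword databasename < updates.sql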

Backup MySQL database

I have a MySQL Database of about 1.7GB. I usually back it up using mysqldump and this takes about 2 minutes. However, I would like to know the answers to the following questions:
Does mysqldump block read and/or write operations to the database? Because in a live scenario, I would not want to block users from using the database while it is being backed up.
It would be ideal for me to back up the WHOLE database only once in, say, a week, but on the intermediate days only one table needs to be backed up as the others won't change. Is there a way to achieve this?
Is mysqlhotcopy a better alternative for these purposes?
mysqlhotcopy does not work in certain cases where the read lock is lost,
and it does not work with InnoDB tables.
mysqldump is used more because it can back up all kinds of tables.
From MySQL documentation
mysqlhotcopy is a Perl script that was originally written and contributed by Tim Bunce. It uses LOCK TABLES, FLUSH TABLES, and cp or scp to make a database backup quickly. It is the fastest way to make a backup of the database or single tables, but it can be run only on the same machine where the database directories are located. mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables. It runs on Unix and NetWare
The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.
Bye.
1) mysqldump only blocks when you ask it to (--lock-tables or --lock-all-tables). But if you want your backup to be consistent, then mysqldump needs either --lock-all-tables (which blocks writes) or --single-transaction (which does not block, but works only for InnoDB); otherwise you might get an inconsistent database snapshot.
2) sure, just enumerate the tables you want to be backed up after the database name:
mysqldump OPTIONS DATABASE TABLE1 TABLE2 ...
Alternatively you can exclude the tables you don't want:
mysqldump ... --ignore-table=TABLE1 --ignore-table=TABLE2 .. DATABASE
So you can do a whole database dump once a week and back up only the changing tables once a day (see the cron sketch below).
3) mysqlhotcopy only works on MyISAM tables, and in most applications you are better off with InnoDB. There are commercial tools (quite expensive) for hot backup of InnoDB tables. Lately there is also a new open-source one for this purpose - Xtrabackup.
Also, to automate the process you can use astrails-safe. It supports database backup with mysqldump and filesystem backup with tar, plus encryption, upload to S3, and many other goodies. There is no xtrabackup support yet, but it should be easy to add if that is what you need.
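A sketch of how that weekly/daily split could look in cron (the user, password, database, table and paths are placeholders):

# m h dom mon dow  command   -- placeholders throughout; % must be escaped in crontab
# Full dump of the whole database once a week (Sunday 02:00).
0 2 * * 0  mysqldump -u backup -pPASS --single-transaction mydb | gzip > /backups/mydb-full-$(date +\%F).sql.gz
# Dump of only the changing table on the other days (Monday-Saturday 02:00).
0 2 * * 1-6  mysqldump -u backup -pPASS --single-transaction mydb changing_table | gzip > /backups/mydb-changing_table-$(date +\%F).sql.gz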
Adding a MySQL slave to your setup would allow you to take consistent backups without locking the production database.
Adding a slave also gives you a binary log of changes. A dump is a snapshot of the database at the time you took the dump. The binary log contains all statements that modified the data, along with a timestamp.
If you have a failure in the middle of the day and you're only taking backups once a day, you've lost half a day's worth of work. With binary logs and mysqldump, you could restore from the previous day and 'play' the logs forward to the point of failure.
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
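A rough sketch of that restore-and-replay (file names and timestamps are placeholders): restore yesterday's dump, then replay the binary log from the time of the dump up to just before the failure.

# 1. Restore last night's dump.
mysql -u root -p mydb < mydb-yesterday.sql

# 2. Replay the binary-log events between the dump time and the failure.
mysqlbinlog --start-datetime="2015-06-01 02:00:00" \
            --stop-datetime="2015-06-01 13:55:00" \
            /var/lib/mysql/binlog.000042 | mysql -u root -p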
If you're running MySQL on a Linux server with LVM disks or a Windows server with VSS, you should check out Zmanda.
It takes binary diffs of the data on disk, which is much faster to read and restore than a text dump of the database.
No, you can specify tables to be locked using --lock-tables, but they aren't locked by default.
If you don't specify any tables then the whole DB is backed up, or you can specify a list of tables:
mysqldump [options] db_name [tables]
I haven't used it, sorry; however, I run a number of MySQL DBs, some bigger and some smaller than 1.7 GB, and I use mysqldump for all my backups.
Maatkit dump might be useful.
http://www.maatkit.org/doc/mk-parallel-dump.html
For MySQL and PHP, try this.
It will also remove backup files older than N days.
$dbhost = 'localhost';
$dbuser = 'xxxxx';
$dbpass = 'xxxxx';
$dbname = 'database1';
$folder = 'backups/'; // Name of the folder where you want to place the file
$filename = $dbname . date("Y-m-d-H-i-s") . ".sql";
$remove_days = 7; // Number of days that the file will stay on the server

$command = "mysqldump --host=$dbhost --user=$dbuser --password=$dbpass $dbname > $folder$filename";
system($command);

$files = glob("$folder" . "*.sql");
foreach ($files as $file) {
    if (is_file($file)
        && time() - filemtime($file) >= $remove_days*24*60*60) { // $remove_days converted to seconds
        unlink($file);
        echo "$file removed \n";
    } else {
        echo "$file was last modified: " . date("F d Y H:i:s.", filemtime($file)) . "\n";
    }
}

reload a .sql schema without restarting mysqld

Is it possible to reload a schema file without having to restart mysqld? I am working in just one db in a sea of many and would like to have my changes refreshed without doing a cold-restart.
When you say "reload a schema file", I assume you're referring to a file that has all the SQL statements defining your database schema, i.e. creating tables, views, stored procedures, etc.?
The solution is fairly simple - keep all the SQL that creates the tables, etc. in a single file, and before each CREATE statement, add a DELETE/DROP statement to remove what's already there. Then when you want to do a reload, just do:
cat myschemafile.sql | mysql -u userid -p databasename
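A minimal sketch of such a schema file and the reload (the table and its columns are made up for illustration; DROP TABLE IF EXISTS is what keeps the reload from failing when the objects are already there):

# Example schema file with a DROP before each CREATE.
cat > myschemafile.sql <<'EOF'
DROP TABLE IF EXISTS widgets;
CREATE TABLE widgets (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
EOF

# Reload it into the running server - no mysqld restart needed.
mysql -u userid -p databasename < myschemafile.sql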