Easiest way to copy a MySQL database? - sql

Does anyone know of an easy way to copy a database from one computer to a file, and then import it on another computer?

Here are a few options:
mysqldump
The easiest, guaranteed-to-work way to do it is to use mysqldump. See the manual pages for the utility here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Basically, it dumps the SQL statements required to rebuild the contents of the database, including creation of tables, triggers, and other objects, and insertion of the data. It's all configurable, so if you already have the schema set up somewhere else, you can dump just the data, for example.
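A typical dump-and-restore cycle might look like this (the database name "mydb" and the root account are placeholders for your own setup):

```shell
# Dump schema + data (plus stored routines and triggers) to a file:
mysqldump -u root -p --routines --triggers mydb > mydb_dump.sql

# Data-only dump, if the schema already exists on the target:
mysqldump -u root -p --no-create-info mydb > mydb_data.sql

# On the destination machine, create the database and load the dump:
mysql -u root -p -e "CREATE DATABASE mydb"
mysql -u root -p mydb < mydb_dump.sql
```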
Copying individual MyISAM table files
If you have a large amount of data and you are using the MyISAM storage engine for the tables that you want to copy, you can just shut down mysqld and copy the .frm, .myd, and .myi files from one database folder to another (even on another system). This will not work for InnoDB tables, and may or may not work for other storage engines (with which I am less familiar).
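As a sketch (paths assume a typical Linux install and a systemd service name; your data directory and service name may differ):

```shell
# Stop the server first; copying live MyISAM files risks corruption.
systemctl stop mysql

# Copy the table files for the "sourcedb" database into "targetdb"
# (a database folder that already exists, here or on another host):
cp /var/lib/mysql/sourcedb/*.frm \
   /var/lib/mysql/sourcedb/*.MYD \
   /var/lib/mysql/sourcedb/*.MYI \
   /var/lib/mysql/targetdb/
chown -R mysql:mysql /var/lib/mysql/targetdb

systemctl start mysql
```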
mysqlhotcopy
If you need to dump the contents of a database while the database server is running, you can use mysqlhotcopy (note that this only works for MyISAM and Archive tables):
http://dev.mysql.com/doc/refman/5.0/en/mysqlhotcopy.html
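Usage is roughly (database name and target directory are placeholders):

```shell
# Copies the table files of "mydb" while the server is running;
# mysqlhotcopy handles the locking and flushing for you.
mysqlhotcopy mydb /backups/mydb_copy
```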
Copying the entire data folder
If you are copying the entire database installation (that is, all of the databases and the contents of each), you can just shut down mysqld, zip up your entire MySQL data directory, and copy it to the new server's data directory.
This is the only way (that I know of) to copy InnoDB files from one instance to another. It will work fine if you're moving between servers running the same OS family and the same version of MySQL; it may or may not work across operating systems and/or MySQL versions; off the top of my head, I don't know.
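A sketch of that procedure between two Linux hosts (paths, hostname, and service name are placeholders):

```shell
# On the old server:
systemctl stop mysql
tar czf /tmp/mysql-data.tar.gz -C /var/lib/mysql .
scp /tmp/mysql-data.tar.gz newserver:/tmp/

# On the new server (same OS family and MySQL version, mysqld stopped):
tar xzf /tmp/mysql-data.tar.gz -C /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
```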

You may very well use SQLyog, a product of Webyog. It uses techniques similar to those mentioned above, but gives you a good GUI so you can see what you are doing. You can get the Community edition or a trial version from their site:
http://www.webyog.com/en/downloads.php#sqlyog
It has options for creating backups to a file and restoring the file on a new server. There is an even better option for exporting a database directly from one server to another.
Cheers,
RDJ

Related

Database on USB device

Currently I use SQLite on USB storage for archival and backup of data, but I'd like to move the whole application to a real DBMS, most likely MariaDB.
Edit: What I currently do is open/create a SQLite file on the USB drive and access it, with the option to access the original database in parallel.
Is there any way to attach an exported database to an already running DBMS instance?
Thanks
I currently run a small database with MariaDB RDBMS. The database files are kept on USB drive, which I keep plugged in permanently.
If I understand correctly, the OP wants to know if it would be possible to have a database on the main drive and be able to configure the RDBMS to also see the backup database when the pendrive is plugged in.
The answer is "Possible, but impractical".
Why:
To work on files from outside of the data directory, one needs to create symbolic links to those files IN the data directory
When the pendrive is not plugged in, the symlinks still exist and the RDBMS might error out, especially when a query tries to access that data.
If one wants to re-create the links when the USB stick is inserted, then to use them the whole RDBMS must be restarted.
A much more robust, convenient and portable solution would be to
keep the backup database in SQLite and...
have a script that copies data from MariaDB to SQLite, for example in Python.
Such a script can then be set up either to run periodically and exit gracefully when the pendrive is not inserted, or to be run on detection of the pendrive.
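A minimal shell sketch of the "exit gracefully" wrapper. The mount point, the marker file, and the conversion script name are all assumptions; the actual MariaDB-to-SQLite copy step is left as a placeholder:

```shell
# Hypothetical wrapper: run the backup only if the pendrive is present.
backup_to_pendrive() {
    mount_point="$1"
    # A marker file on the stick distinguishes "mounted" from an
    # empty mount directory left behind when the stick is absent.
    if [ ! -e "$mount_point/.backup_target" ]; then
        echo "pendrive not present; skipping backup"
        return 0
    fi
    # Placeholder for the actual MariaDB -> SQLite copy step, e.g.:
    # python3 mariadb_to_sqlite.py --out "$mount_point/backup.sqlite"
    echo "backup written to $mount_point"
}
```

Scheduled via cron (e.g. every 30 minutes), the function simply does nothing when the stick is out, which is exactly the graceful-exit behaviour described above.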
Another way to back up the data is to just use mysqldump, as pointed out in the comments to the original question, but simple as it may be, it takes away the ability to query the backup data without the (sometimes lengthy) process of re-importing.

Progress DB: backup restore and query individual tables

Here is the use case: we need to back up some of the tables from a client server, copy them to our servers, restore them, then run some queries using ODBC.
I managed to do this process for the entire database by using probkup for backup, prorest for restore and proserve to make it accessible for SQL queries.
However, some of the databases are big (> 8 GB), so we are looking for a way to back up only the tables we need. I didn't find anything in the probkup documentation about how this can be done.
Progress only supports full database backups.
To get the effect that you are looking for you could dump (export) the tables that you want and then load them into an empty database.
"proutil dump" and "proutil load" are where you want to start digging.
The details will vary depending on exactly what you want to do and what resources and capabilities you have available to you.
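A rough sketch of the binary dump/load cycle. The database, table, and directory names are placeholders, and the exact options vary by version, so check the proutil documentation for yours:

```shell
# Binary dump of one table from the source database:
proutil sourcedb -C dump customer /dumps

# Binary load of that table into the (empty) target database:
proutil targetdb -C load /dumps/customer.bd

# Indexes are not carried over by a binary load; rebuild them afterwards:
proutil targetdb -C idxbuild all
```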
Another option would be to replicate the tables in question to a partial database. Progress has a product called "pro2" that can help with that. It is usually pointed at SQL targets but you could also point it at a Progress database.
Or, if you have programming skills, you could put together a solution using replication triggers (under the covers that's what pro2 does...)
probkup and prorest are block-level programs and can't do a backup or restore by table.
To do what you're asking for, you'll need to dump the data from the source db's tables and then load it into the target db.
If your objective is simply to maintain a copy of the DB, you might also try incremental backups. Depending upon your situation, that might speed things up a bit.
Other options include various forms of DB replication, which allow you to keep real- or near-real-time copies of your database.
OpenEdge Replication. With the correct license, you can do query-only access on the replication target, which is good for reporting and analysis.
Third-party replication products. These can be more flexible in terms of both target DBs and limiting the tables to be replicated.
Home-grown replication (by copying and applying AI files). This is not terribly complicated, but you have to factor the cost of doing the work and maintaining the system. There are some scripts out there that can get you started.
Or, as Tom said, you can get clever with replication via triggers.

Backup a mysql database with a large number of tables (>100,000)

I know, really bad database design, but here we are, I have some kind of forum platform (Based on PunBB) and for each forum, I've generated a new set of tables. Really bad idea.
Time has passed and now I have more than 100,000 tables (SHOW TABLES; SELECT FOUND_ROWS(); - 112965 rows in set (1.21 sec)). Performance is great though, as the tables do the job of indexes, and when you make a direct reference to one table, it's ultra fast.
The issue is that now I am trying to back everything up and move to another server. Of course, it takes forever. I've launched a mysqldump:
mysqldump --max_allowed_packet=500M --force --opt -u root -pXXXX a > fullbackup.sql
And it's still processing, a little more than 12 hours in! The backup is already 546 MB in size and MySQL is still alive and working.
I've tried to copy the MySQL files directly, but I ran into the issue that a lot of the tables were corrupted.
Any idea to speed this up?
If you are using AWS RDS take a snapshot.
If you are not, use some other snapshot-based tool. Percona has one: http://www.percona.com/software/percona-xtrabackup/. Using mysqldump to back up large databases is extremely slow.
If your source database is already corrupt, that's an independent issue.
If you are copying the database and the copy is corrupt, that is because you are doing a "hot copy" which means that you can't copy a database while it's running without a special "snapshot tool". Even file systems have such tools. You need a consistent set of files.
I presume from the fact that your tables are corrupted when you copy the files that you are using InnoDB.
It says in the MySQL documentation here
Physical backup tools include the mysqlbackup of MySQL Enterprise Backup for InnoDB or any other tables, or file system-level commands (such as cp, scp, tar, rsync) for MyISAM tables.
You can use MySQL Enterprise Backup to perform fast, reliable, physical hot backups (i.e. while the database is running). I believe it is quite pricey though.
At my last job we ran MySQL instances with over 160,000 tables.
With that many tables, we had to disable innodb_file_per_table and store all the tables in the central tablespace file ibdata1. If we didn't, the server couldn't function efficiently because it had too many open files. This should be easier with MySQL 8.0 but in the old version of MySQL we were using, the data dictionary couldn't scale up to so many tables.
To do backups, we used Percona XtraBackup. This is an open-source tool that works very similarly to MySQL Enterprise Backup. It performs a physical backup of the data directory, but without the risk of the file corruption you ran into by copying files directly. Percona XtraBackup works by copying the files while also continually copying the InnoDB transaction log, so the in-flight parts of the files can be made consistent. It's very reliable.
Backing up a database with Percona XtraBackup is a little bit faster, but the greater benefit comes when restoring the backup. Restoring a dump file from mysqldump is very slow. Restoring a physical backup like the one produced by Percona XtraBackup can be done as fast as you can copy the backup files into a new data directory, and then start up the MySQL Server.
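With a recent Percona XtraBackup, a full backup-and-restore cycle looks roughly like this (paths are placeholders; older releases used the innobackupex wrapper instead):

```shell
# Take a physical backup while MySQL is running:
xtrabackup --backup --target-dir=/backups/full

# Apply the copied transaction log so the backed-up files are consistent:
xtrabackup --prepare --target-dir=/backups/full

# Restore: stop mysqld, then copy the files into an empty data directory:
xtrabackup --copy-back --target-dir=/backups/full
chown -R mysql:mysql /var/lib/mysql
```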
A recent blog from Percona shows the difference:
https://www.percona.com/blog/2018/04/02/migrate-to-amazon-rds-with-percona-xtrabackup/

How to Sql Backup or Mirror database?

We are not hosting our databases. Right now, one person manually creates a .bak file from the production server. The .bak is then copied to each developer's PC. Is there a better approach that would make this process easier? I am working on a build project for our team right now, and I am thinking about adding the .bak file to SVN so each person has the correct local version. I tried to generate a SQL script, but it has no data, just the schema.
Developers can't share a single dev database?
Adding the .bak file to SVN sounds bad. That's going to keep every version of it forever - you'd be better off (in most cases) leaving it on a network share visible by all developers and letting them copy it down.
You might want to use SSIS packages to let developers make ad hoc copies of production.
You might also be interested in the Data Publishing Wizard, an open source project that lets you script databases with their data. But I'd lean towards SSIS if developers need their own copy of the database.
If the production server has online connectivity to your site you can try the method called "log shipping".
This entails creating a baseline copy of your production database, then taking chunks of the transaction log written on the production server and applying the actions contained in those log chunks to your copy. This ensures that after a certain delay your backup database will be in the same state as the production database.
Detailed information can be found here: http://msdn.microsoft.com/en-us/library/ms187103.aspx
As you mentioned SQL 2008 among the tags: as far as I remember, SQL 2008 has some built-in support for setting this up.
You can create a scheduled backup and restore.
You don't have to back up to the developer's PC, because SQL Server has its own backup folder you can use.
You can also have a restore script generated for each PC from one location, if the developers want to hold the database on their local systems.
RESTORE DATABASE [xxxdb] FROM
DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10
GO
Check out SQL Source Control from Red Gate; it can be used to keep schema and data in sync with a source control repository (the docs say it supports SVN). It supports the database on a centrally deployed server, or on many developer machines as well.
Scripting out the data probably won't be a fun time for everyone, depending on how much data there is, but you can also select which tables to script (like lookups) and populate any larger business-entity tables using SSIS (or a data generator for testing).

What is the easiest way to copy a database from one Informix IDS 11 Server to another

The source database is quite large. The target database doesn't grow automatically. They are on different machines.
I'm coming from a MS SQL Server, MySQL background and IDS11 seems overly complex (I am sure, with good reason).
One way to move data from one server to another is to export the database using the dbexport command.
Then, after copying the export files to the destination server, run the dbimport command.
To create a new database you need to create the DBSpace for the new database using the onmonitor tool, at this point you could use the existing files from the other server.
You will then need to create the database on the destination server using the dbaccess tool. The dbaccess tool has a database option that allows you to create a database. When creating the database you specify what DBSpace to use.
The source database may be made up of many chunks which you will also need to copy and attach to the new database.
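A sketch of the dbexport/dbimport flow described above (the database name, dbspace name, and directory are placeholders):

```shell
# On the source server: export the database (schema + data) to a directory:
dbexport mydb -o /tmp/export

# Copy the export directory to the destination server, then import,
# placing the new database in an existing dbspace ("datadbs" here):
dbimport mydb -i /tmp/export -d datadbs
```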
The easiest way is dbexport/dbimport, as others have mentioned.
The fastest way is using onpload, the High Performance Loader. If you have lots of data, but not a ridiculous number of tables, this is definitely worth pursuing. There are some bits and pieces on the IIUG site that may be of assistance in scripting the HPL to generate all the config you'll need.
You have a few choices.
dbexport/dbimport
onunload/onload
HPL (high performance loader) options.
I have personally used onunload/onload and dbexport/dbimport. I have not used HPL. I'm using IDS 10.
onunload/onload IBM docs
Back up the raw database to disk or tape in page size chunks
faster (especially if you go to disk)
Issues if the database servers are on different operating systems or hardware, or if they just have different page sizes.
dbexport/dbimport IBM docs
backs up the database into delimited ASCII files
writes an ASCII schema of the database including all users, tables, views, indexes, etc.; everything about the structure of the database goes into one huge plain text file
separate plain text files for each table of the database as well
not so fast
issues on dbimport with any table that has bad data, any view with incorrect syntax, etc. (This can be a good thing: an opportunity to identify and clean)
DO NOT LEAVE THIS TAPE ON THE FRONT SEAT OF YOUR CAR WHEN YOU RUN INTO THE STORE FOR AN ICE CREAM (or you'll be on the news). Also read ... Not a very secure way to be moving data around. :)
Limitation: Requires exclusive access to the source database.
Here is a good place to start in the docs --> Migration of Data Between Database Servers
Have you used the export tool? There used to be a way: if you first put the DBs into quiescent mode, you could actually copy the DBSpaces across (the dbspaces tool, I think... it's been a few years now).
Because with Informix you used to be able to specify the DBSpace(s) to use for the table (maybe even in ALTER TABLE?).
Check the dbaccess tool - there is an export command.
Put the DBs into quiescent mode or shut down, copy the dbspaces, and then attach the table, telling it to point to the new dbspaces file. (The dbspaces tool could be worth looking at... I have manuals around here; they are for 9.2, but it shouldn't have changed too much.)
If both machines use the same version of IDS, then another option would be to use ontape to take a backup on one machine and restore it on the other. You can use the STDIO option and then just stream the backup onto the other machine, where the restore can read straight from STDIO.
From the "Data Replication for High Availability and Distribution" redbook:
ontape -s -L 0 -F | rsh secondary_server "ontape -p"
You could also create a passwordless ssh connection between the hosts and transfer in a more secure way.
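For instance, the same pipeline over ssh might look like this (the hostname is a placeholder):

```shell
# Stream a level-0 ontape backup straight into a restore on the
# secondary server, over ssh instead of rsh:
ontape -s -L 0 -F | ssh secondary_server "ontape -p"
```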