Proper way to migrate a postgres database?

I have a dev version and a production version running in django.
I recently started populating it with a lot of data and found that Django's loaddata tries to load everything into memory before adding it to the db, and my files will be too big for that.
What is the proper way to push my data from my dev machine to my production?
I did...
pg_dump -U user -W db > filename.sql
and then on the production server I did...
psql dbname < filename.sql
It seems like it worked, all the data is there, but it came up with some errors such as
relation xxx already exists
constraint xxx for relation xxx already exists
and there were quite a few of them, but like I said everything appears to be there. Is this the right way to do it?
Edit: the production machine already has a database with data in it, and I don't want to truncate the tables before importing.

This is the script that I use:
pg_dump -d DATABASE_NAME -U postgres --format plain --inserts > /FILE.sql
Edit: since you say in the comments that you don't want to truncate the tables before importing, you can't do this type of import into your production database. I suggest emptying your production database before importing the dev database dump.
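If you do want the restore itself to drop and recreate objects so those "already exists" errors go away, here's a minimal sketch, assuming your pg_dump is 9.4 or newer (for --if-exists); the user and database names are placeholders:
# emit DROP ... IF EXISTS statements ahead of each CREATE
pg_dump -U user --clean --if-exists -W db > filename.sql
# replay the dump on the production server
psql -U user -d dbname -f filename.sql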

Related

magento migration from one domain to another

I am transferring my Magento site from an old domain to a new domain.
I have exported the database file from the old server and I have done all the necessary changes.
Now I'm trying to run the exported file against the new database, but the SQL import has been stuck loading for almost an hour.
Please somebody help.
(A screenshot of the loading screen was attached here.)
Thank you.
I would suggest making a backup of the whole cPanel account and reimporting that; this way you won't mess anything up in the database. If you still need to export and reimport the SQL yourself, make sure you disable foreign key checks by adding these statements before and after the dump's contents:
SET foreign_key_checks = 0;
SET foreign_key_checks = 1;
And to successfully import a large database you may also need to raise MySQL's limits, such as max_allowed_packet, in its configuration file (my.ini / my.cnf).
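For example, a minimal sketch of wrapping an existing dump so key checks stay off for the whole import (the file and database names are placeholders):
# prepend/append the SET statements around the dump, then pipe it all into mysql
{ echo "SET foreign_key_checks = 0;"; cat data_dump.sql; echo "SET foreign_key_checks = 1;"; } | mysql -u user -p magento_db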
I wouldn't do it through a GUI interface. Do you have SSH access? If so, here's how you can run it from the command line, which won't be limited by browser processing.
dump:
mysqldump -u '<<insert user>>' -p --single-transaction --databases <<database name>> > data_dump.sql
load:
mysql -p -u '<<insert user>>' <<database name>> < data_dump.sql
It's best to do this as the MySQL root user so you don't run into permission trouble.
On import, if you get errors that the definer is not a user, you can either create the definer user or run this sed command, which will replace the definer in your file with the current MySQL user:
sed -E 's/DEFINER=`[^`]+`@`[^`]+`/DEFINER=<<new user name>>/g' data_dump.sql > cleaned_data_dump.sql
As espradley and damek132 said, combine both answers: disable foreign key checks if the dump doesn't already do it (it's usually included when exporting), and use the mysql command line through SSH. You should be up and running in half an hour.
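For instance, a minimal sketch of the SSH route (host, user, and database names are placeholders; -t forces a terminal so mysql can prompt for the password):
# copy the dump up, then run the import server-side
scp data_dump.sql user@yourserver:/tmp/data_dump.sql
ssh -t user@yourserver "mysql -u dbuser -p dbname < /tmp/data_dump.sql"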

Importing MySQL tables from other database in live site with mysqldump can cause trouble?

Scenario: I want to replicate MySQL tables from one database to another.
Possible best solution: probably to use MySQL's replication feature.
Current solution I'm working on as a workaround (mysqldump), because I can't spend time learning about replication under the current deadline.
So currently I'm using command like this:
mysqldump -u user1 -ppassword1 --single-transaction SourceDb TblName | mysql -u user2 -ppassword2 DestinationDB
Based on some tests, it seems to be working fine.
While running the above command, I ran ab with 1000 requests against the destination site and also tried accessing the site from a browser.
My concern is for the live destination site, into which we are importing the whole table (the dump internally drops the existing table and creates a new one with the new data).
Can I be sure the live site won't break during this process, or is there a risk factor?
If there is, can it be resolved?
As you already admitted, replication is the best solution here; I'd agree with that.
You said you have 1000 requests on the "Destination" side? Are these 1000 connections to Destination read-only?
Of course, dropping and recreating the table isn't the right choice while there are active connections.
I can suggest one improvement: instead of loading directly into the table, load into a different database and swap the tables. This should be quicker as far as connections to the Destination database/tables are concerned.
Create the new table in a different database:
mysqldump -u user1 -ppassword1 --single-transaction -hSOURCE_HOST SourceDb TblName | mysql -uuser2 -ppassword2 -hDESTINATION_HOST DB_New
(Are you sure you don't need "-h" in your original command?)
Swap the tables:
rename table DestinationDB.TblName to DestinationDB.old_TblName, DB_New.TblName to DestinationDB.TblName;
If you're on the same host (which I don't think you are), you might want to use pt-online-schema-change and swap the tables!
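Putting both steps together, a sketch using the placeholder names above (DB_New and DestinationDB must live on the same destination server, since RENAME TABLE swaps across databases only within one server):
# load the fresh copy into a staging database
mysqldump -u user1 -ppassword1 --single-transaction -h SOURCE_HOST SourceDb TblName | mysql -u user2 -ppassword2 -h DESTINATION_HOST DB_New
# swap old and new atomically
mysql -u user2 -ppassword2 -h DESTINATION_HOST -e "RENAME TABLE DestinationDB.TblName TO DestinationDB.old_TblName, DB_New.TblName TO DestinationDB.TblName;"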

Export DB with PostgreSQL's PgAdmin-III

How do I export a PostgreSQL db into SQL that can be executed in another pgAdmin?
Exporting as a backup file doesn't work when there's a version difference.
Exporting as an SQL file doesn't execute when run on a different pgAdmin.
I tried exporting a DB with pgAdmin III, but when I tried to execute the SQL in another pgAdmin it threw errors, and when I tried to "restore" a backup file, it said there was a version difference and it couldn't do the import/restore.
So is there a "safe" way to export a DB into standard SQL that can be executed plainly in the pgAdmin SQL editor, regardless of version?
Don't try to use PgAdmin-III for this. Use pg_dump and pg_restore directly if possible.
Use the version of pg_dump from the destination server to dump the origin server. So if you're going from (say) 8.4 to 9.2, you'd use 9.2's pg_dump to create a dump. If you create a -Fc custom format dump (recommended) you can use pg_restore to apply it to the new database server. If you made a regular SQL dump you can apply it with psql.
See the manual on upgrading your PostgreSQL cluster.
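As a minimal sketch of that route (hostnames and the database name are placeholders; run the newer server's client tools):
# dump in custom format from the old server
pg_dump -Fc -h oldhost -U postgres mydb > mydb.dump
# create the target database, then restore into it
createdb -h newhost -U postgres mydb
pg_restore -h newhost -U postgres -d mydb mydb.dump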
Now, if you're trying to downgrade, that's a whole separate mess.
You'll have a hard time creating an SQL dump that'll work in any version of PostgreSQL. Say you created a VIEW that uses a WITH query. That won't work when restored to PostgreSQL 8.3 because it didn't support WITH. There are tons of other examples. If you must support old PostgreSQL versions, do your development on the oldest version you still support and then export dumps of it for newer versions to load. You cannot sanely develop on a new version and export for old versions, it won't work well if at all.
More troubling, developing on an old version won't always give you code that works on the new version either. Occasionally new keywords are added when support for new specification features is introduced. Sometimes issues are fixed in ways that affect user code. For example, if you were to develop on the (ancient and unsupported) 8.2, you'd have lots of problems with implicit casts to text on 8.3 and above.
Your best bet is to test on all supported versions. Consider setting up automated testing using something like Jenkins CI. Yes, that's a pain, but it's the price for software that improves over time. If Pg maintained perfect backward and forward compatibility it'd never improve.
Export/Import with pg_dump and psql
1. Set PGPASSWORD
export PGPASSWORD='123123123';
2. Export DB with pg_dump
pg_dump -h <<host>> -U <<username>> <<dbname>> > /opt/db.out
/opt/db.out is the dump path; you can specify your own.
3. Then set PGPASSWORD again for your other host. If the host or the password is the same, this is not required.
4. Import the DB on your other host
psql -h <<host>> -U <<username>> -d <<dbname>> -f /opt/db.out
If the username is different, find and replace it with your local username in the db.out file, and make sure only the username is replaced and not data.
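One hedged way to do that (GNU sed; olduser and newuser are hypothetical role names, and matching on the "OWNER TO" clause keeps the replacement away from data rows):
# rewrite only ownership statements in the plain-format dump
sed -i 's/OWNER TO olduser;/OWNER TO newuser;/g' /opt/db.out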
If you still want to use PGAdmin, see the procedure below.
Export DB with PGAdmin:
Select DB and click Export.
File Options
Name the DB file for your local directory
Select Format - Plain
Ignore Dump Options #1
Dump Options #2
Check Use Insert Commands
Objects
Uncheck any tables you don't want
Import DB with PGAdmin:
Create New DB.
With the DB selected, click Menu -> Plugins -> PSQL Console
Type the following command to import the DB:
\i /path/to/db.sql
If you want to export Schema and Data separately:
Export Schema
File Options
Name the schema file for your local directory
Select Format - Plain
Dump Options #1
Check Only Schema
Check Blobs (By default checked)
Export Data
File Options
Name the data file for your local directory
Select Format - Plain
Dump Options #1
Check Only Data
Check Blobs (By default checked)
Dump Options #2
Check Use Insert Commands
Check Verbose messages (By default checked)
Note: Export/Import takes time depending on the DB size, and PGAdmin adds some more time on top.

How to backup and restore mysql database in rails 3

Is it possible to do the following:
1. Hot database backup of mysql database from Rails 3 application
2. Incremental database backup of mysql database from Rails 3 application
3. Cold database backup of mysql database from Rails 3 application
4. Restore any of the above databases ( Hot, incremental and cold) through Rails 3 application.
Please let me know how to achieve this.
Set up some cron jobs. I like to use Whenever for writing them. I run this bash script once per day:
#!/bin/bash
BACKUP_FILENAME="APPNAME_production_`date +%s`.gz"
mysqldump -ce -h MYSQL.HOST.COM -u USERNAME -pPASSWORD APPNAME_production | gzip | uuencode $BACKUP_FILENAME | mail -s "daily backup for `date`" webmaster@yourdomain.com
echo -e "\n====\n== Backed up APPNAME_production to $BACKUP_FILENAME on `date` \n====\n"
And output it to cron.log. This may require some tweaking on your end, but it works great once you get it going. It e-mails the backup to you once per day as a gzipped file; my database is fairly large and the file is under 2000 KB right now.
It's not the safest technique, so if you're really concerned that someone might get into your e-mail and get access to the backups (which should have sensitive information encrypted anyways), then you'll have to find another solution.
To restore:
gzip -d APPNAME_production_timestamp.gz
mysql -u USERNAME -pPASSWORD APPNAME_production < APPNAME_production_timestamp
or something similar... I don't need to restore often so I don't know this one off the top of my head, but a quick google search should turn up something if this doesn't work.
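Equivalently, a one-liner sketch that decompresses straight into mysql (names are placeholders from the script above):
gunzip -c APPNAME_production_timestamp.gz | mysql -u USERNAME -pPASSWORD APPNAME_production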

PostgreSQL how to create a copy of a database or schema?

Is there a simple way to create a copy of a database or schema in PostgreSQL 8.1?
I'm testing some software which does a lot of updates to a particular schema within a database, and I'd like to make a copy of it so I can run some comparisons against the original.
If it's on the same server, you just use the CREATE DATABASE command with the TEMPLATE parameter. For example:
CREATE DATABASE newdb WITH TEMPLATE olddb;
Use pg_dump with the --schema-only option.
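Spelled out, a minimal sketch of that route (database names are placeholders):
# dump only the schema, no data
pg_dump --schema-only -U postgres olddb > schema.sql
# create an empty database and load the schema into it
createdb -U postgres newdb
psql -U postgres -d newdb -f schema.sql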
If you have to copy the schema from the local database to a remote database, you may use one of the following two options.
Option A
Copy the schema from the local database to a dump file.
pg_dump -U postgres -Cs database > dump_file
Copy the dump file from the local server to the remote server.
scp localuser@localhost:dump_file remoteuser@remotehost:dump_file
Connect to the remote server.
ssh remoteuser@remotehost
Copy the schema from the dump file to the remote database. Because the dump was made with -C, it creates the target database itself, so connect to the maintenance database first.
psql -U postgres -d postgres -f dump_file
Option B
Copy the schema directly from the local database to the remote database without using an intermediate file. Again, because of -C, connect to the remote maintenance database and let the dump create the target database itself.
pg_dump -h localhost -U postgres -Cs database | psql -h remotehost -U postgres postgres
This can be done by running the following command:
CREATE DATABASE [Database to create] WITH TEMPLATE [Database to copy] OWNER [Your username];
Once filled in with your database names and your username, this will create a copy of the specified database. This will work as long as there are no other active connections to the database you wish to copy. If there are other active connections you can temporarily terminate the connections by using this command first:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = '[Database to copy]'
AND pid <> pg_backend_pid();
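From the shell, a minimal sketch combining the two steps (olddb, newdb, and myuser are placeholders; CREATE DATABASE must be run while connected to a database other than the one being copied):
# kick off any remaining sessions on the source database
psql -U postgres -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'olddb' AND pid <> pg_backend_pid();"
# clone it via the template mechanism
psql -U postgres -d postgres -c "CREATE DATABASE newdb WITH TEMPLATE olddb OWNER myuser;"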
A good article I wrote for Chartio's Data School, which goes a bit more in depth on how to do this, can be found here:
https://dataschool.com/learn/how-to-create-a-copy-of-a-database-in-postgresql-using-psql