The most efficient way to move PostgreSQL databases

What is the most efficient, secure way to pipe the contents of a PostgreSQL database into a compressed file, then copy it to another machine?
This would be used for local development hosting, or for backing up to a remote server, using *nix-based machines at both ends.

This page has a complete backup script for a webserver, including the pg_dump output.
Here is the syntax it uses:
NOW="$(date +'%Y-%m-%d')"    # $NOW and $GZIP are defined elsewhere in the full script
GZIP="/bin/gzip"
BACKUP="/backup/$NOW"
PFILE="$(hostname).$(date +'%T').pg.sql.gz"
PGSQLUSER="vivek"
PGDUMP="/usr/bin/pg_dump"
mkdir -p "${BACKUP}"
# -x skips GRANT/REVOKE; -D dumps rows as INSERTs with column names.
# With no database argument, pg_dump defaults to a database named after the user.
$PGDUMP -x -D -U${PGSQLUSER} | $GZIP -c > ${BACKUP}/${PFILE}
After you have gzipped it, you can transfer it to the other server with scp, rsync, or NFS, depending on your network and services.
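For example, to copy the dump with scp (the destination host and path below are placeholders):
scp ${BACKUP}/${PFILE} user@backup.example.com:/backups/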

pg_dump is indeed the proper solution. Be sure to read the man page. In Espo's example, some options are questionable and may not suit you: -x omits access privileges (GRANT/REVOKE), and -D dumps rows as INSERT statements with column names, which is portable but restores far more slowly than the default COPY format (and the -D short option was removed in later PostgreSQL versions; the long form is --column-inserts).
As with every other database manipulation, test a lot!
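One way to test a dump is to restore it into a scratch database; a minimal sketch, assuming the gzipped dump from above and a throwaway database named testrestore:
createdb testrestore
gunzip -c ${BACKUP}/${PFILE} | psql -U ${PGSQLUSER} testrestore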

Related

How to backup/restore a Firebird database?

I am really confused about the Firebird v2.5 backup/restore process. What should I use to back up/restore a local Firebird database:
fbsvcmgr.exe, gbak.exe, isql.exe or nbackup.exe?
Are these all the options, or am I wrong about something?
What is the practical way to do it from a C++ application?
How can I tell whether the database already exists the first time, so I can decide whether to restore it or not?
I usually use gbak (don't know about the others).
Backup
gbak -b -v -user SYSDBA -password "masterkey" D:\database.FDB E:\database.fbk
Restore
gbak -c -user SYSDBA -password masterkey E:\database.fbk E:\database_restore.fdb
If the file already exists when restoring, you can control the behaviour with gbak's restore flags (see the example below):
-c = create new file
-r = replace file
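For example, to overwrite an existing database file during a restore (note that on Firebird 2.0 and later, plain -r is short for -r[ecreate] and will fail on an existing file unless combined with o[verwrite]; -rep[lace] is the explicit replace switch):
gbak -rep -user SYSDBA -password masterkey E:\database.fbk E:\database_restore.fdb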
Here is a good page for FB backup/restore: http://www.destructor.de/firebird/gbak.htm
There are two primary ways to create backups in Firebird:
gbak, which creates a logical backup of the database (object 'descriptions' (e.g. table structure, views, etc) and data)
nbackup (also known as nbak), which creates a physical backup of the database (physical data pages changed since the previous nbackup)
In most cases, I'd suggest using gbak, because it is simpler and also allows you to move backups between platforms and Firebird versions, while nbackup is only really suitable for the same platform and Firebird version (but has the advantage of allowing incremental backups).
ISQL is an interactive query CLI, and cannot be used to create backups. Fbsvcmgr is the "Firebird Service Manager" tool, which can be used to invoke service operations on a (remote) Firebird server. This includes backup and restore operations through gbak and nbackup. Fbsvcmgr is pretty low-level and hard to use (see fbsvcmgr -? for options).
For gbak, you'd normally invoke the services through the gbak executable (option -se[rvice] <service>); see also Remote Backups & Restores in the gbak documentation. For nbackup, you can either use the nbackup tool locally, or use fbsvcmgr (or another tool that supports service operations) to invoke the same functionality remotely (action_nbak and action_nrest); see also Backups on remote machines (Firebird 2.5+) in the nbackup documentation.
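For illustration, a service-based gbak backup might look like this (the host name and paths are placeholders; note that the backup file is written on the server):
gbak -b -se remote_host:service_mgr -user SYSDBA -password masterkey /data/employee.fdb /backups/employee.fbk
and roughly the equivalent through fbsvcmgr:
fbsvcmgr remote_host:service_mgr -user SYSDBA -password masterkey action_backup dbname /data/employee.fdb bkp_file /backups/employee.fbk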
For detailed description on gbak, see gbak - Firebird Backup & Restore Utility. For nbackup, see Firebird's nbackup tool.
With a gbak backup, you'd normally restore the database using 'create' (option -c[reate]) or 'recreate' (-r[ecreate] without o[verwrite] option), which will fail if the database file already exists. See also the gbak manual linked above for more information.
I won't really answer your question about how to do it from a C++ application, because I don't program in C++, and your question is already too broad as it is. But know that it is possible to invoke Firebird service operations, including backup and restore using both gbak and nbackup, from C++ code (that is essentially what the Firebird tools themselves do). If you want to know more about that, I'd suggest you ask on the firebird-support Google Group.

How to migrate MySQL databases to my local machine

I have two MySQL databases with the same structure but different data running on Amazon AWS. I would like to move those databases to my local machine. They are not too big; less than 1 GB each. I read about mysqldump, but it seemed too complicated and I could not find easy-to-follow instructions.
First, I tried the MySQL Workbench migration tool, but it can't connect to the source.
Second, I tried connecting to the databases from Workbench directly, but that failed too.
Third, I tried to move the data table by table, but when I export a table to a .csv file and open it, the table formatting is lost.
How can I combine those databases and move them to my local computer efficiently?
Go to your SSH shell (terminal):
mysqldump -u root -p --all-databases > exported.sql
Now move the dump to the target system (your local computer) and run:
mysql -u root -p < exported.sql
Do this for each source database and you're done.
PS: replace root with your DB admin username if needed.
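To check that everything arrived, you can list the databases on the target machine afterwards:
mysql -u root -p -e 'SHOW DATABASES;'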
UPDATE:
You can do this on the fly from source to destination in one line:
mysqldump -h source_hostname_or_ip -u root --password='password' --extended-insert --databases DatabaseName | mysql -u root --password='password' --host=destination_host -C DatabaseName
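If you only want your two application databases rather than everything (--all-databases also includes the mysql system schema), you can name them explicitly; db1 and db2 below are placeholders:
mysqldump -u root -p --databases db1 db2 > exported.sql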
Why are you not able to connect using Workbench? Fill in your SSH IP (port 22 is not needed), select the SSH key file (in text format, not .ppk), and fill in your RDS instance endpoint and credentials.
Then TEST CONNECTION...
If successful, you can use the EXPORT option, select your DB and proceed!

Restore Redis dump to a different database

How can I dump a Redis instance that's running on database 0 and restore it on my local machine into a different database (8)?
I already secure-copied the dump file:
scp hostname:/var/lib/redis/dump.rdb .
But if I replace my local Redis dump.rdb with this one, I'll get the data in database 0. How can I restore it into a specific database?
Firstly, note that the use of numbered/shared Redis databases is inadvisable. You should really consider using dedicated Redis servers with a single DB (0) on them (more info at: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances).
Redis does not offer a straightforward way to do this, but there are two basic ways one could go about it:
Pre-processing: modify the dump.rdb file to load into your database of choosing. You could build a tool for that or perhaps use one of the existing ones. Jan-Erik has done an outstanding job of documenting the RDB v7 format at http://rdb.fnordig.de/file_format.html so all you need to do is basically change the Database Selector byte.
Post-restore: use the MOVE command on the output of SCANning your restored database; this is easily scriptable (see the sketch below).
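A minimal sketch of the post-restore approach in shell, assuming the dump was restored into local database 0, the keys should end up in database 8, and key names contain no whitespace:
# iterate over all keys in DB 0 and MOVE each one to DB 8
redis-cli -n 0 --scan | while read -r key; do
  redis-cli -n 0 move "$key" 8
done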
I ended up creating a script in Ruby to dump and restore the keys I wanted. (Please note that this approach is slow; it took around 1 minute for 200 keys.)
Get the keys to dump / restore
ssh hostname redis-cli --scan --pattern 'awesome_filter_pattern*'
Open an SSH connection to the production server (e.g. with the net-ssh gem)
Dump each remote key:
dump = ssh.exec!("redis-cli dump #{key}").chomp
Restore it on localhost (the second argument is the TTL; 0 means no expiry):
$redis.connection.restore(key, 0, dump)

Firebird remote backup

I want to back up a Firebird database.
I am using the gbak.exe utility. It works fine.
But when I run the backup from a remote computer, the backup file is stored on the server's file system.
Is there a way to force the gbak utility to download the backup file?
Backup is stored on the Firebird Server
gbak -b -service remote_fb_ip:service_mgr absolute_path_to_db_file absolute_path_to_backupfile -user SYSDBA -pass masterkey
Backup is stored on the local machine
gbak -b remote_fb_ip:absolute_path_to_db_file path_to_local_file -user SYSDBA -pass masterkey
See: remote server local backup and the gbak documentation.
It is always a problem to grab a remote database onto a different remote computer. For this purpose, our institute uses Handy Backup (for Firebird-based apps, too), but if you prefer GBAK, here are some more ways to do it.
The simplest method is to call the remote database directly from a local machine using GBAK (I see it was already described before me). Another method is installing GBAK on the remote machine using administrative instruments for Windows networks. This method can be tricky, as in mixed-architecture networks (with domain and non-domain sections) some obstacles always exist.
A practical alternative, therefore, is writing a backup script (batch file) that calls GBAK and then copies the resulting Firebird backup file to some other network destination, using a command-line network file manager or an FTP client like FileZilla (see the sketch below). It requires some (minimal) skill and research, but after successful testing it can work many times over.
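A minimal sketch of such a script, run on the database server (paths, host name and credentials are placeholders):
# back up the database locally, then push the file to another machine
gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /backups/mydb.fbk
scp /backups/mydb.fbk backupuser@backup.example.com:/backups/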
If you have gbak locally, you can back up over a network. Simply specify the host name before the database.
For example:
gbak -B 192.168.0.10:mydatabase mylocalfile.fbk -user SYSDBA -password masterkey
Try this command:
"C:\Program Files (x86)\Firebird\Firebird_2_5\bin\gbak" -v -t -user SYSDBA -password "masterkey" 192.168.201.10:/database/MyDatabase.fdb E:\Backup\BackupDatabase.fbk
Of course you need to update your paths accordingly :)
I believe you should be able to do this if you use the service manager for the backup, and specify stdout as the backup file. In that case the file should be streamed to the gbak client and you can write it to disk with a redirect.
gbak -backup -service hostname:service_mgr employee stdout > backupfile.fbk
However I am not 100% sure if this actually works, as the gbak documentation doesn't mention this. I will check this and amend my answer later this week.

How to transfer data from one database to another database in sql 2005

I want to transfer data from one database to another database in SQL Server 2005.
I tried DTS, but it's not working.
Need more information, but if you want to just copy a database, you can back it up, then restore that backup in another database. If you just want to copy individual tables then DTS is your friend. How is it "not working" for you?
select *
into SecondDatabase.dbo.TableName
from FirstDatabase.dbo.TableName
If you want something else, you have to be more specific.
If you're moving a few tables as a one-off, then the simplest way is to use the BCP command-line utility.
bcp db_name.schema_name.table_name out table_name.dat -c -t, -S source_server -T
bcp db_name.schema_name.table_name in table_name.dat -c -t, -S destination_server -T
Change '-T' to '-U your_username -P your_password' if you're not using trusted connections.
If you're moving data regularly between servers on a LAN then consider using linked servers. http://msdn.microsoft.com/en-us/library/ff772782.aspx
Linked-server performance over WANs is often poor, in my experience. Consider doing a BCP out, a secure file transfer to the destination server, then a BCP in if the servers aren't on the same LAN.