Firebird remote backup

I want to back up a Firebird database.
I am using the gbak.exe utility, and it works fine.
But when I run the backup from a remote computer, the backup file is stored on the server's file system.
Is there a way to force the gbak utility to download the backup file?
Thanks

Backup is stored on the Firebird Server
gbak -b -service remote_fb_ip:service_mgr absolute_path_to_db_file absolute_path_to_backupfile -user SYSDBA -pass masterkey
Backup is stored on the local machine
gbak -b remote_fb_ip:absolute_path_to_db_file path_to_local_file -user SYSDBA -pass masterkey
See: remote server local backup and the gbak documentation.

It is always a problem to grab a remote database onto a different remote computer. For this purpose, our institute uses Handy Backup (for Firebird-based apps, too), but if you prefer GBAK, here are some more ways to do it.
The simplest method is to call the remote database directly from the local machine using GBAK (I see it was already described before me). Another method is installing GBAK on the remote machine using administrative tools for Windows networks. This can be tricky, as in mixed-architecture networks (with domain and non-domain sections) some obstacles always exist.
Therefore, the most practical method is writing a backup script (batch file) that calls GBAK and then copies the resulting backup file to some other network destination, using a command-line network file manager or an FTP manager such as FileZilla. It requires some (minimal) skill and research, but once tested successfully it can be reused many times; a minimal sketch follows.
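For illustration, a minimal batch sketch of that approach (all paths, credentials and the share name are placeholders you would adapt to your setup):
@echo off
rem Back up the local database with GBAK (install path as in the other answer)
"C:\Program Files (x86)\Firebird\Firebird_2_5\bin\gbak" -b -v -user SYSDBA -password masterkey C:\data\mydb.fdb C:\backup\mydb.fbk
rem Copy the resulting backup file to a network destination
copy C:\backup\mydb.fbk \\backupserver\share\mydb.fbk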
Best regards!

If you have gbak locally, you can back up over a network. Simply specify the host name before the database.
For example:
gbak -B 192.168.0.10:mydatabase mylocalfile.fbk -user SYSDBA -password masterkey

Try this command:
"C:\Program Files (x86)\Firebird\Firebird_2_5\bin\gbak" -v -t -user SYSDBA -password "masterkey" 192.168.201.10:/database/MyDatabase.fdb E:\Backup\BackupDatabase.fbk
Of course you need to update your paths accordingly :)

I believe you should be able to do this if you use the service manager for the backup, and specify stdout as the backup file. In that case the file should be streamed to the gbak client and you can write it to disk with a redirect.
gbak -backup -service hostname:service_mgr employee stdout > backupfile.fbk
However, I am not 100% sure this actually works, as the gbak documentation doesn't mention it. I will check this and amend my answer later this week.

Related

How to backup/restore a Firebird database?

I am really confused about the Firebird v2.5 backup/restore process. What should I use to back up/restore a local Firebird database:
fbsvcmgr.exe, gbak.exe, isql.exe or nbackup.exe?
Are these all the options, or am I missing something?
What is the practical way to do it from a C++ application?
How can I tell whether a database already exists the first time, so I can decide whether to restore it or not?
I usually use gbak (don't know about the others).
Backup
gbak -b -v -user SYSDBA -password "masterkey" D:\database.FDB E:\database.fbk
Restore
gbak -c -user SYSDBA -password masterkey E:\database.fbk E:\database_restore.fdb
If the file already exists when restoring, you can control the behavior with gbak's restore flags (example below):
-c = create new file
-r = replace file
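For example, to overwrite an existing database on restore (same placeholder paths as above; note that in Firebird 2.0+ plain -r means recreate and fails on an existing file, so the overwrite keyword is needed):
gbak -r o -user SYSDBA -password masterkey E:\database.fbk E:\database_restore.fdb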
Here is a good page on FB backup/restore: http://www.destructor.de/firebird/gbak.htm
There are two primary ways to create backups in Firebird:
gbak, which creates a logical backup of the database (object 'descriptions' (e.g. table structure, views, etc) and data)
nbackup (also known as nbak), which creates a physical backup of the database (physical data pages changed since the previous nbackup)
In most cases, I'd suggest using gbak, because it is simpler and also allows you to move backups between platforms and Firebird versions, while nbackup is only really suitable for the same platform and Firebird version (but has the advantage of allowing incremental backups).
ISQL is an interactive query CLI, and cannot be used to create backups. Fbsvcmgr is the "Firebird Service Manager" tool, which can be used to invoke service operations on a (remote) Firebird server. This includes backup and restore operations through gbak and nbackup. Fbsvcmgr is pretty low-level and hard to use (see fbsvcmgr -? for options).
For gbak, you'd normally invoke the services through the gbak executable (option -se[rvice] <service>), see also Remote Backups & Restores in the gbak documentation. For nbackup you either can use the nbackup tool locally, or you need to use the fbsvcmgr (or another tool that supports service operations) to invoke the same functionality remotely (action_nbak and action_nrest), see also Backups on remote machines (Firebird 2.5+) in the nbackup documentation.
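As a sketch of the fbsvcmgr route (hostname, paths and credentials are placeholders; assumes Firebird 2.5+), a level-0 nbackup of a remote database would look something like:
fbsvcmgr remote_host:service_mgr user SYSDBA password masterkey action_nbak dbname /data/employee.fdb nbk_file /backup/employee.nbk nbk_level 0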
For detailed description on gbak, see gbak - Firebird Backup & Restore Utility. For nbackup, see Firebird's nbackup tool.
With a gbak backup, you'd normally restore the database using 'create' (option -c[reate]) or 'recreate' (-r[ecreate] without o[verwrite] option), which will fail if the database file already exists. See also the gbak manual linked above for more information.
I won't really answer your question about how to do it from a C++ application, because I don't program C++, and your question is already too broad as it is. But know that it is possible to invoke Firebird service operations, including backup and restore using both gbak and nbackup, from C++ code (that is essentially what Firebird tools itself do). If you want to know more about that, I'd suggest you ask on the firebird-support Google Group.

How to migrate MySQL databases to my local machine

I have two MySQL databases with the same structure but different data running on Amazon AWS. I would like to move those databases to my local machine. They are not too big, less than 1 GB each. I read about mysqldump, but it seemed too complicated and I could not find easy-to-follow instructions.
First, I tried using the MySQL Workbench migration tool and can't connect to the source.
Second, I tried connecting to the databases from Workbench, but failed.
Third, I tried to move the data table by table, but when I export a table to a .csv file and open it, the table formatting is lost.
How can I combine those databases and move them to my local computer efficiently?
Go to your SSH shell (terminal):
mysqldump -u root -p --all-databases > exported.sql
Now move the dump to the target system (your local computer; an scp example follows these steps) and run
mysql -u root -p < exported.sql
Do this for each DB source and you're done.
PS: replace root with your DB admin username if needed.
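For the move itself you could, for example, pull the dump from your local computer with scp (hostname and path are placeholders):
scp root@remote_host:/root/exported.sql .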
UPDATE:
You can do this on the fly from source to destination in one line:
mysqldump -h source_hostname_or_ip -u root --password='password' --extended-insert --databases DatabaseName | mysql -u root --password='password' --host=destination_host -C DatabaseName
Why are you not able to connect using Workbench? Fill in your SSH IP (port 22 is not needed), select the SSH key file (in text format, not .ppk), and fill in your RDS instance endpoint and credentials.
Then TEST CONNECTION...
If successful, you can use the EXPORT option, select your DB and proceed!

Where can I find the sql file after mysqldump

I have successfully connected using SSH and entered the right credentials. Where can I find the backup SQL file? Thanks in advance.
Once connected to the remote server, take a dump of the database using the following command (note that -h takes a hostname):
mysqldump -R -h localhost -u username -ppassword databasename > /home/krishna/databasename.sql
Then you will find your database dump in the /home/krishna/ folder.
Run pwd on the remote machine to see where the mysqldump file resides. You can transfer it to your personal computer using scp, as:
scp $PWD/dumpfile localuser@localhostip:/home/localuser
This command will prompt for the local PC's password; enter it, and the file will be copied to your home folder on the local machine.
I can see you have logged in to the remote server through SSH, so you will find your mysqldump file in your SSH user's home directory. If you want to download that file to your local PC, log in through FTP with the same SSH user details and download it.
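For example, with sftp, which runs over SSH so the same credentials work (hostname and filename are placeholders):
sftp username@remote_host
get databasename.sql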
Thank you for the answers! I consolidated all of them and came up with my own approach. I used mysqldump with the command lines you suggested and made a backup. Then I used FTP to access the server's folders; that's where I downloaded the file. Again, thank you all so much.

rsync remote to local automatic backup

I would like to automatically back up my server monthly and weekly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I got my first backup manually by using this command in a terminal:
sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
I am then prompted for that user's password, and Bob's your uncle.
This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this? Like running this script automatically every Sunday?
EDIT
I forgot to mention that I let DirectAdmin back up the files I need, and then copy those files from the remote server to a local server.
This command worked for me; combine it with a cron job (an example crontab entry follows):
rsync -avz username@ipaddress:/path/to/backup /path/to/save
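A sketch of a matching crontab entry that runs the command every Sunday at 03:00 (paths are placeholders; an unattended run also needs passwordless SSH key authentication set up):
0 3 * * 0 rsync -avz username@ipaddress:/path/to/backup /path/to/save
Add it with crontab -e.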

The most efficient way to move psql databases

What is the most efficient, secure way to pipe the contents of a PostgreSQL database into a compressed tar file, then copy it to another machine?
This would be used for local development hosting, or backing up to a remote server, using *nix-based machines at both ends.
This page has a complete backup script for a webserver, including the pg_dump output.
Here is the syntax it uses:
BACKUP="/backup/$NOW"
PFILE="$(hostname).$(date +'%T').pg.sql.gz"
PGSQLUSER="vivek"
PGDUMP="/usr/bin/pg_dump"
$PGDUMP -x -D -U${PGSQLUSER} | $GZIP -c > ${BACKUP}/${PFILE}
After you have gzipped it, you can transfer it to the other server with scp, rsync or NFS, depending on your network and services.
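For example, with scp (destination host and path are placeholders), reusing the variables from the script above:
scp ${BACKUP}/${PFILE} user@otherserver:/backup/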
pg_dump is indeed the proper solution. Be sure to read the man page. In Espo's example, some options are questionable (-x and -D) and may not suit you.
As with every other database manipulation, test a lot!
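Since the dump above is plain SQL, a restore test can be as simple as piping it into psql on a freshly created test database (database name, user and filename are placeholders):
gunzip -c /backup/somedate/host.time.pg.sql.gz | psql -U vivek -d testdb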