I have successfully connected using SSH and entered the right credentials. Where can I find the backup SQL file? Thanks in advance.
Once connected to the remote server, take a dump of the database using the following command:
mysqldump -R -h localhost -u username -ppassword databasename > /home/krishna/databasename.sql
Then you will be able to find your database dump in the /home/krishna/ folder.
Run pwd on the remote machine to see where the mysqldump output file resides. You can transfer it to your personal computer using scp like this:
scp $PWD/dumpfile localuser@localhostip:/home/localuser
This command will prompt for your local PC password; enter it and the file will be copied to your home folder on the local machine.
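Note that pushing with scp like this requires an SSH server to be running on your local PC. If that is not the case, an alternative (just a sketch, reusing the hypothetical user name and dump path from above; replace remoteserver with your server's address) is to pull the file from your local machine instead:
# run this on your local machine, not on the server
scp username@remoteserver:/home/krishna/databasename.sql /home/localuser/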
I can see you have logged in to the report server through SSH, so you will find your mysqldump file in your SSH user's home directory. If you want to download that file to your local PC, log in through FTP with the same SSH user details and download it.
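If no FTP client is at hand, sftp accepts the same SSH credentials from the command line. A minimal sketch (user name, host, and file name here are placeholders):
# run on your local PC; the dump is downloaded into the current directory
sftp ssh_user@report_server_ip:/home/ssh_user/databasename.sql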
Thank you for the answers! I consolidated all of it and came up with my own solution. I used mysqldump with the command line you suggested and made a backup. Then I used FTP to access the server's folders, which is where I downloaded the file from. Again, thank you all so much.
I have two identical MySQL databases with different data running on Amazon AWS. I would like to move those databases to my local machine. They are not too big, less than 1 GB each. I read about mysqldump, but it seemed too complicated and I could not find easy-to-follow instructions.
First, I tried using the MySQL Workbench migration tool but could not connect to the source.
Second, I tried connecting to the databases from Workbench directly, but that failed too.
Third, I tried to move table by table, but when I export a table to a .csv file and try to open it, the table formatting is lost.
How can I combine those databases and move them to my local computer efficiently?
Go to your SSH shell (terminal) and run:
mysqldump -u root -p --all-databases > exported.sql
Now move the dump to the target system (your local computer; see the scp sketch further down) and do:
mysql -u root -p < exported.sql
Do this for each DB source and you're done.
PS: replace root with your DB admin username if needed.
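For the step of moving the dump to your local computer, one option is scp run from your local machine. A rough sketch (the host name and remote path are placeholders; adjust them to wherever exported.sql was written):
# pull the dump from the source server into the current local directory
scp root@source_hostname_or_ip:~/exported.sql .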
UPDATE:
You can do this on the fly from source to destination in one line:
mysqldump -h source_hostname_or_ip -u root --password='password' --extended-insert --databases DatabaseName | mysql -u root --password='password' --host=destination_host -C DatabaseName
Why are you not able to connect using Workbench? Fill in your SSH IP (port 22 is not needed), select the SSH key file (in plain text format, not .ppk), and fill in your RDS instance endpoint and credentials.
Then TEST CONNECTION...
If successful, you can use the EXPORT option, select your DB, and proceed!
I want to back up a Firebird database.
I am using the gbak.exe utility. It works fine.
But when I run the backup from a remote computer, the backup file is stored on the server's file system.
Is there a way to force the gbak utility to download the backup file?
Thanks
Backup is stored on the Firebird Server
gbak -b -service remote_fb_ip:service_mgr absolute_path_to_db_file absolute_path_to_backupfile -user SYSDBA -pass masterkey
Backup is stored on the local machine
gbak -b remote_fb_ip:absolute_path_to_db_file path_to_local_file -user SYSDBA -pass masterkey
See: remote server local backup and the gbak documentation.
It is always a problem to grab a remote database onto a different remote computer. For this purpose, our institute uses Handy Backup (for Firebird-based apps, too), but if you prefer GBAK, here are some more ways to do it.
The simplest method is to call the remote database directly from a local machine using GBAK (I see this was already described before me). Another method is to install GBAK on the remote machine using the administrative instruments for Windows networks. This method can be tricky, as in mixed-architecture networks (with domain and non-domain sections) some obstacles always exist.
Therefore, the most practical method is to write a backup script (batch file) that calls GBAK and then copies the resulting Firebird backup file to some different network destination, using a command-line network file manager or an FTP manager like FileZilla. It requires some (minimal) skill and research, but after successful testing it can be reused many times.
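For illustration only, a minimal sketch of such a script on a Linux machine that has gbak and scp available (every path, host name, and credential below is a placeholder; on Windows the same two commands would go into a .bat file with copy or an FTP client instead of scp):
#!/bin/sh
# step 1: back up the remote database into a local .fbk file with gbak
gbak -b remote_fb_ip:/database/MyDatabase.fdb /backups/MyDatabase.fbk -user SYSDBA -pass masterkey
# step 2: copy the resulting backup file to a different network destination
scp /backups/MyDatabase.fbk backupuser@archive_host:/srv/firebird-backups/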
Best regards!
If you have gbak locally, you can back up over a network. Simply specify the host name before the database.
For example:
gbak -B 192.168.0.10:mydatabase mylocalfile.fbk -user SYSDBA -password masterkey
Try this command:
"C:\Program Files (x86)\Firebird\Firebird_2_5\bin\gbak" -v -t -user SYSDBA -password "masterkey" 192.168.201.10:/database/MyDatabase.fdb E:\Backup\BackupDatabase.fbk
Of course you need to update your paths accordingly :)
I believe you should be able to do this if you use the service manager for the backup, and specify stdout as the backup file. In that case the file should be streamed to the gbak client and you can write it to disk with a redirect.
gbak -backup -service hostname:service_mgr employee stdout > backupfile.fbk
However I am not 100% sure if this actually works, as the gbak documentation doesn't mention this. I will check this and amend my answer later this week.
I want to copy files from a source to a target unattended via a bash script. I have the option to use SFTP, FTP over SSL, rsync, WebDAV, or CIFS.
I do not have the option to install an SSH key pair on the target side (Strato HiDrive), so scp and sftp won't work, will they?
I have read about an scp -W option to store the password in a file, but can't find any detailed information about it…
Any ideas?
I think you have two questions here.
Q1 is how you should keep a copy of your files on a remote server. The answer to that is rsync over ssh.
Q2 is how to supply a password to ssh when you can't put your key on the remote server. That is answered here:
how to pass password for rsync ssh command
Hope that helps.
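For completeness, one commonly suggested tool in that direction is sshpass, which feeds a stored password to ssh so rsync can run unattended. A rough sketch (all names and paths are placeholders, and keeping a password in a plain file is a security trade-off you have to accept first):
# the password is read from a file that only your user can read (chmod 600 it)
sshpass -f /home/localuser/.target_pass rsync -avz /local/source/dir/ targetuser@target_host:/remote/backup/dir/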
I would like to automatically back up my server monthly and weekly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I got my first backup manually by using this command in the terminal:
sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
I am then prompted for that user's password, and Bob's your uncle.
This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this? For example, automatically running this script every Sunday?
EDIT
I forgot to mention that I let DirectAdmin back up the files I need and then copy those files from the remote server to a local server.
This command worked for me. Combine it with a cron job:
rsync -avz username#ipaddress:/path/to/backup /path/to/save
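A sketch of the matching crontab entry for the weekly run (cron cannot type a password for you, so this assumes non-interactive authentication is in place, e.g. an SSH key; the paths are placeholders):
# edit with crontab -e, then add: run the backup every Sunday at 03:00 and log the output
0 3 * * 0 rsync -avz username@ipaddress:/path/to/backup /path/to/save >> /var/log/weekly-backup.log 2>&1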
I have a folder with 30,000 files. I want to copy 1,000 files at a time via SSH to another folder. I need to do that because my script times out when I try to run it on all 30k.
Is that possible?
EDIT
Based on the comments.
I connect via PuTTY. The script is executed by the user by clicking a button, and it is not the problem. I just want to move the files in batches, and I don't want to do it via FTP.
Like the LIMIT command in SQL (LIMIT 0,1000 or LIMIT 1000,2000)
The best way to copy over SSH is by using scp (pscp in PuTTY):
pscp.exe -r somedir me@server:/data/vol1
pscp.exe uses all settings from PuTTY, including authentication keys.
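If you specifically need the LIMIT-style batching from the question, a rough server-side sketch (assuming a bash shell and file names without spaces or newlines; all paths are placeholders) is to slice the directory listing and copy one slice at a time:
# copy files 1-1000 of the listing; change the range to 1001,2000 for the next batch, and so on
ls /path/to/source | sed -n '1,1000p' | xargs -I{} cp /path/to/source/{} /path/to/target/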