Bacula/Bareos disaster recovery from scratch using bextract - backup

The Bacula/Bareos documentation stresses that the Catalog bootstrap file must be saved somewhere safe. I understand the Catalog consists of a MySQL DB dump plus, optionally, the Bacula/Bareos config files, but how exactly does anyone recover from scratch if the whole backup infrastructure is gone?
Is it just a matter of installing all the Bacula/Bareos software, importing the MySQL dump and the config files, and then firing up the Director?

A bit of an old question, but I'll provide some feedback.
If you've done a mysqldump of the database (or pg_dump, depending on the backend) you essentially have the catalog in its full state. I believe you can simply restore this database to a new server and restore the old config files (these are not stored in the dump but rather in /etc/bareos). Also, make sure the database user/password matches what is specified in bareos-dir.conf, or else you will not be able to connect to the database. Depending on how your storage devices are set up, you may also need to adjust bareos-sd.conf.
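As a rough sketch of that recovery on a fresh server (assuming a MySQL catalog named bareos, a dump file called bareos_catalog.sql, and a tarball of /etc/bareos made with full paths; adjust names and paths to your setup):
# Reinstall the Bareos packages first, then stop the Director
systemctl stop bareos-dir
# Put the saved configuration back in place
tar -xzf etc-bareos-backup.tar.gz -C /
# Recreate the catalog database and load the dump
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS bareos"
mysql -u root -p bareos < bareos_catalog.sql
# Make sure the catalog credentials match bareos-dir.conf, then start the Director
systemctl start bareos-dir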
To answer the OP's other question: you can use a volume without a catalog. It's a bit cumbersome, but it is possible with the volume utility tools:
http://www.bacula.org/5.0.x-manuals/en/utility/utility/Volume_Utility_Tools.html
For example:
List jobs on a volume: bls -j -V Full_1-1886 FileStorage1
List files on a volume: bls -V Full_1-1886 FileStorage1
Once you have found the file or directory (note: wildcard characters are supported), you can extract it:
bextract -i restoreFiles -V Full_2-1277 FileStorage2 /tmp/
Where:
restoreFiles is a file listing the files/directories to restore, one per line
/tmp/ is the destination of the restore
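For illustration, the include list could be as simple as this (the paths here are hypothetical); without -i, bextract restores everything on the volume:
# contents of restoreFiles: one path or wildcard per line
/etc/bacula/
/home/alice/photos/*.jpg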

Related

PostgreSQL Query To Create A Directory

Files are being written to a directory using the COPY query:
Copy (SELECT * FROM animals) To '/var/lib/postgresql/data/backups/2020-01-01/animals.sql' With CSV DELIMITER ',';
However, if the directory 2020-01-01 does not exist, we get the error:
could not open file "/var/lib/postgresql/data/backups/2020-01-01/animals.sql" for writing: No such file or directory
The PostgreSQL server is running inside a Docker container with the volume mapping /mnt/backups:/var/lib/postgresql/data/backups.
The Copy query is being sent from a Node.js app outside of the Docker container.
The mapped host directory /mnt/backups was created by Docker Compose and is owned by root, so the Node.js app sending the COPY query is unable to create the missing directories due to insufficient permissions.
The backup file is meant to be transferred out of the Docker container to the Docker host.
Question: Is it possible to use an SQL query to ask PostgreSQL 11.2 to create a directory if it does not exist? If not, how would you recommend creating the directory?
Using Node.js 12.14.1 on an Ubuntu 18.04 host, and PostgreSQL 11.2 inside a Docker 19.03.5 container.
An easy way to solve this is to create the file directly on the client machine. Using COPY ... TO STDOUT you can redirect the query output to the client's standard output, which you can capture and save to a file. For instance, using psql on the client machine:
$ psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > file.csv
Creating the output directory in case it does not exist:
$ mkdir -p /mnt/backups/2020-01/ && psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv
On a side note: try to avoid exporting files onto the database server. Although it is possible, I consider it bad practice. Doing so means either writing files into the postgres system directories or giving the postgres user permission to write somewhere else, neither of which you should be comfortable with. Export data directly to the client, either using COPY as mentioned above or following the advice from @Schwern. Good luck!
Postgres has its own backup and restore utilities which are likely to be a better choice than rolling your own.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
A simple backup rotation script might look like this:
#!/bin/sh
# Dump a single table to a dated, custom-format archive.
table='animals'
# Note the '@' between the credentials and the host.
url='postgres://username@host:port/database_name'
date=`date -Idate`
file="/path/to/your/backups/$date/$table.sql"
mkdir -p `dirname $file`
pg_dump $url -w -Fc --table=$table -f $file
To avoid hard-coding the database password, -w means it will not prompt for a password and will instead look for a password file. Or you can use any of the many Postgres authentication options.
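If you later need to pull that table back out of one of these custom-format archives, a matching restore might look something like this (the target database URL and archive path are assumptions):
# Restore just the animals table from a custom-format archive
pg_restore -d "$url" --table=animals /path/to/your/backups/2020-01-01/animals.sql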

How do I copy a file into a docker-cloud container? (AKA How to copy a file over ssh without using scp)

docker-machine has an scp command, but docker-cloud doesn't seem to have any way to transfer a file from my local machine to the cloud container or vice-versa.
I'm submitting an answer below that I've finally figured out (in hopes that it will help someone), but I'd love to hear better answers if there are any!
(I realize docker-cloud is going away, but perhaps this will be helpful for other cloud platforms as well)
To transfer a file from your local machine to a docker-cloud instance that is running linux with the tee command available:
docker-cloud container exec id12345 tee filename.ext < file_to_copy.ext > /dev/null
(you'll want to redirect output to /dev/null as shown unless you want the entire contents of the file to be echoed to the terminal... twice)
Transferring a file to your local machine is somewhat easier:
docker-cloud container exec id12345 cat file_to_copy.ext > filename.ext
Note: I'm not sure this works for binary files, and it can even cause issues with linefeed characters in text files, depending on terminal settings, etc. - but it's the best answer I've got short of using an external service like https://transfer.sh
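If you need to move binary files, one untested workaround (assuming base64 exists both locally and in the container, and that exec forwards stdin the same way it does for tee above) is to encode the data in transit:
# local -> container
base64 file_to_copy.bin | docker-cloud container exec id12345 sh -c 'base64 -d > filename.bin'
# container -> local
docker-cloud container exec id12345 base64 file_to_copy.bin | tr -d '\r' | base64 -d > filename.bin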

Restoring Firebird 2.5 with fbsvcmgr

I'm configuring live backup and restore scripts to maintain "replicated" Firebird DBs on a main and a reserve server.
The backup works fine:
"C:\Program Files\Firebird\Firebird_2_5\bin\nbackup" -B 0 "D:\testdb\LABORATORY_DB.FDB" D:\testdb\lab_FULL.fbk -user SYSDBA -pass masterkey -D OFF
Copying the file to the remote server works as well:
net use R: \\fbserv2\reserve
xcopy /Y D:\testdb\lab_FULL.fbk R:\
But restoring on the remote side with
"C:\Program Files\Firebird\Firebird_2_5\bin\fbsvcmgr.exe" fbserv2:service_mgr -user SYSDBA -password masterkey -action_nrest -dbname d:\reservedb\LABORATORY_DB.FDB -nbk_file d:\reserve\lab_FULL.fbk
caused an error:
Error (80) creating database file: d:\reservedb\LABORATORY_DB.FDB via copying from: d:\reserve\lab_FULL.fbk
The only way to restore the database is to manually delete the old d:\reservedb\LABORATORY_DB.FDB before restoring. GBAK has an option to overwrite the database file being restored, while fbsvcmgr does not seem to. Is there any other option? Did I miss something?
You can't restore over an existing database using nbackup. You either need to
delete the old database first and then restore,
or restore under a different name, delete the old database, and rename the new database to its final name.
See also the nbackup documentation, chapter Making and restoring backups:
If the specified database file already exists, the restore fails and you get an error message.
As far as I know it was a design decision to not allow overwriting an existing database. Gbak indeed has that option, but only for historic reasons; if it were built today, it would likely not have that option.
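A rough sketch of the second approach as a batch fragment, using the paths from the question (this assumes fbsvcmgr returns a nonzero exit code on failure; treat it as untested):
rem Restore to a temporary name first
"C:\Program Files\Firebird\Firebird_2_5\bin\fbsvcmgr.exe" fbserv2:service_mgr -user SYSDBA -password masterkey -action_nrest -dbname d:\reservedb\LABORATORY_DB.NEW.FDB -nbk_file d:\reserve\lab_FULL.fbk
rem Swap it in only if the restore succeeded
if %errorlevel% equ 0 (
  del d:\reservedb\LABORATORY_DB.FDB
  ren d:\reservedb\LABORATORY_DB.NEW.FDB LABORATORY_DB.FDB
)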

Save database on external hard drive

I am creating some databases using PostgreSQL, but I want to store them on an external hard drive due to a lack of disk space on my computer.
How can I do this?
You can store the database on another disk by specifying it as the data_directory setting. You need to specify this at startup and it will apply to all databases.
You can put it in postgresql.conf:
data_directory = '/volume/path/'
Or, specify it on the command line when you start PostgreSQL:
postgres -c data_directory='/volume/path/'
Reference: 18.2. File Locations
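To confirm which directory a running server is actually using, you can ask it directly from psql (the command below assumes peer authentication as the postgres OS user):
sudo -u postgres psql -c 'SHOW data_directory;'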
STEP 1: If postgresql is running, stop it:
sudo systemctl stop postgresql
STEP 2: Get the path to access your hard drive.
(on Linux) Find and mount your hard drive:
# Retrieve your device's name with:
sudo fdisk -l
# Then mount your device
sudo mount /dev/DEVICE_NAME YOUR_HD_DIR_PATH
STEP 3: Copy the existing database directory to the new location (in your hard drive) with rsync.
sudo rsync -av /var/lib/postgresql YOUR_HD_DIR_PATH
Then rename the previous Postgres main dir with a .bak extension to prevent conflicts:
sudo mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main.bak
Note: my Postgres version was 11. Replace the version in the paths with your own.
STEP 4: Edit postgres configuration file:
sudo nano /etc/postgresql/11/main/postgresql.conf
Change the data_directory line with:
data_directory = 'YOUR_HD_DIR_PATH/postgresql/11/main'
STEP 5: Restart Postgres and check that everything is working:
sudo systemctl start postgresql
pg_lsclusters
The output should show the status as 'online':
Ver Cluster Port Status Owner Data directory Log file
11 main 5432 online postgres YOUR_HD_DIR_PATH/postgresql/11/main /var/log/postgresql/postgresql-11-main.log
Finally, you can access PostgreSQL with:
sudo -u postgres psql
You can try following the walkthrough here. This worked well for me and is similar to @Antiez's answer.
Currently, I am trying to do the same, and the only conflict I'm having at the moment is that there seems to be an issue with PostgreSQL's incremental backup and point-in-time recovery processes. I think it has something to do with folder permissions. If I try uploading a ~30 MB CSV to the postgres DB, it crashes and the server will not start again because files cannot be written to the pg_wal directory. The only file in that directory is 000000010000000000000001, and it does not move on to 000000010000000000000002, etc., while writing to a new table.
My stackoverflow post looking for a solution to this issue can be found here.
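If you hit permission problems like the ones described above, it is worth double-checking that the copied data directory is still owned by the postgres user and is not world-accessible, since the server cannot write WAL files otherwise. A quick sketch, assuming the layout from the steps above:
sudo chown -R postgres:postgres YOUR_HD_DIR_PATH/postgresql
sudo chmod 700 YOUR_HD_DIR_PATH/postgresql/11/main
sudo systemctl restart postgresql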

rsync "failed to set times on "XYZ": No such files or directory (2)

I have a Dlink NAS (dns-323) in RAID1 that I use to backup family photos, videos and some other data. I also manually rsync to a dedicated backup drive on a little Atom Linux box whenever we add a lot of new files to the NAS. I finally lost a drive on the NAS and through a misstep of my own, also lost the entire volume. No problem, that's what the backup drive is for. I used the same rsync command in reverse to restore files to the NAS after I replaced the bad drive and created a new RAID volume. This worked well, except that after the command finished, I noticed that it did not preserve timestamps. Timestamps were preserved in the NAS->backup direction, but not the backup->NAS direction.
I run the rsync command on the Atom Linux box with these options (this does preserve timestamps):
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dns-323 /mnt/dlink_backup --progress --verbose --itemize-changes
The reverse command to restore the volume from the backup (which did not preserve timestamps) is very similar:
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
which actually restores the files, but gives many errors like:
rsync: failed to set times on "/mnt/dns-323/Rich/Code/.emacs": No such file or directory (2)
I've been googling most of the afternoon and trying different things, but so far haven't solved my problem. I used the 'touch' command to successfully modify the times of one or two files on the NAS, just to prove that it can be done since I believe that is one thing that rsync must do. I've tried doing this as my user and as root. By this I mean that I've run sudo rsync ..... as well as rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' ..... where ..... is all of the previously mentioned parameters. My /etc/fstab has these entries for the NAS and the backup drive, respectively:
# the dns-323
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=1000,gid=1000,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
# the dlink_backup drive
/dev/sdb /mnt/dlink_backup ext3 defaults 0 0
It's not absolutely critical to preserve timestamps if it just plain can't be done, but it seems like it should be possible - I'm just stumped.
Thanks in advance. Let me know if I can provide any additional information.
I've earned my "tumbleweed" badge as a result of this one. pats self on back
What I've learned:
My solution:
1) Removed the left hard drive from the dns-323, which is half of the RAID1 volume.
2) Mounted (ext3) this drive using a USB-to-SATA adapter to the machine where I run rsync.
3) Performed the rsync command for the restore outlined above, minus the --delete option (which really shouldn't have been there) and with --size-only added; see the command sketch after this list. The --size-only option meant that timestamps were essentially the only thing that got restored, since the files themselves had already been restored properly.
4) Unmounted the left drive from the Atom machine and returned that drive to the dns-323, while also removing the right drive. The right drive needed to be removed so that the dns-323 would recognize that the RAID volume was degraded.
5) Re-added the right drive to the dns-323 and told it to rebuild the RAID volume.
6) All timestamps are now good.
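For reference, the fix-up command from step 3 looked roughly like the original restore command, with --delete dropped and --size-only added (the local mount point for the pulled drive is hypothetical):
rsync --archive --human-readable --inplace --numeric-ids --size-only /mnt/dlink_backup/dns-323/ /mnt/left_drive/ --progress --verbose --itemize-changes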
A possible alternate solution:
I've read enough about rsync and NFS/Samba/CIFS now to understand that this problem is likely related to permissions on the NAS (dns-323). Internally, the user/group IDs on the dns-323 are 501/501. No permutation of how I mounted the dns-323 on the Atom box would allow rsync to properly set timestamps. I do believe that changing my user account on the Atom box to have a uid/gid of 501/501 would have worked, though. My user had the default 1000/1000 and root had 0/0, IIRC.