MongoDB backup plan

I want to switch from MySQL to MongoDB, but significant data loss (more than 1 hour) is not acceptable for me.
I need to have 3 backup plans:
Hourly backup plan. Data is flushed to disk every X minutes, so if something goes wrong with the server I can be sure that after a reboot it will have all data from at most an hour ago. Can I configure this?
Daily backup plan. Data is synced to a backup disk every day, so even if the server explodes I can recover yesterday's data within a few hours. Should I use fsync, master-slave or something else? I would like minimal traffic, so ideally only changes would be sent.
Weekly backup plan. Data is synced to a second backup disk, so if both the server and the first backup disk explode I still have the data from last week. Here it is a question of reliability, so it's OK to send all data over the network.
How can I do it?

The fsync command flushes the data to disk. It is executed every 60 seconds by default, but this can be configured using the --syncdelay command line parameter.
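For example, a minimal sketch (the dbpath is a placeholder, and the flags apply to older mongod versions where journaling is a startup option) that flushes every 60 seconds with journaling enabled:
mongod --dbpath /var/lib/mongodb --journal --syncdelay 60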
The documentation on backups has some good pointers for daily and weekly backups. For the daily backup, a master-slave configuration seems like the best option, as it will only sync changes.
For the weekly backup you can also use a master-slave configuration, or replication. Another option is the mongodump utility, which will back up the entire database. It is capable of creating backups while the database is running, so you can run it on the main database or one of the slaves. You can also lock the slave before backing it up.
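A rough sketch of the lock-and-dump approach against a slave (the host name and output path are placeholders, assuming the legacy mongo shell is available):
mongo --host slave-host --eval "db.fsyncLock()"
mongodump --host slave-host --port 27017 --out /backups/weekly-$(date +%F)
mongo --host slave-host --eval "db.fsyncUnlock()"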

Try this backup script if you want to create a backup from a slave MongoDB database to S3.
#!/bin/bash
# DB host (secondary preferred so as to avoid impacting primary performance)
HOST='SomeHost/mongodbtest-slave'
# DB name
DBNAME=***
# S3 bucket name
BUCKET=*-backup
# Linux user account
USER=ubuntu
# Current time
TIME=$(/bin/date +%d-%m-%Y-%T)
# Password
PASSWORD='somePassword#!2*1'
# Username
USERNAME=someUsername
# Backup directory
DEST=/home/ubuntu/tmp
# Tar file of backup directory
TAR=$DEST/../$TIME.tar
# Create backup dir (-p to avoid warning if it already exists)
/bin/mkdir -p $DEST
# Log
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME";
# Dump from the MongoDB host into the backup directory
mongodump -h "$HOST" --port 27017 -d "$DBNAME" -u "$USERNAME" -p "$PASSWORD" -o "$DEST"
# Create tar of backup directory
/bin/tar cvf $TAR -C $DEST .
# Upload tar to S3
/usr/bin/aws s3 cp $TAR s3://$BUCKET/
# Remove tar file locally
/bin/rm -f $TAR
# Remove backup directory
/bin/rm -rf $DEST
# All done
echo "Backup available at https://s3.amazonaws.com/$BUCKET/$TIME.tar"
You can put the steps above in an executable shell script and run it at any interval using crontab.
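For example, a crontab entry (the script path and log file are hypothetical) that runs the backup every hour:
0 * * * * /home/ubuntu/mongo-backup-s3.sh >> /var/log/mongo-backup.log 2>&1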

If you want to outsource the backup solution entirely, MongoDB Management Service takes snapshots every six hours. The default retention policy on the snapshots will allow you to get point-in-time restore for 24 hours, daily snapshots for a week, weekly snapshots for a month, and monthly snapshots for a year.
This FAQ has the full retention policy.
The backup service continually backs up your replica set by reading the oplog so the overhead is lower than full local periodic snapshots.

Maybe you can use automongobackup.

On the first point.
MongoDB has the notion of a 'durable write operation'. If journaling is enabled, you can only lose data that has not yet been written to the journal. This is a very small window of time (100 milliseconds by default).
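A minimal sketch of the relevant settings in a YAML-style mongod.conf (the values shown are the defaults and only an illustration):
storage:
  journal:
    enabled: true
    commitIntervalMs: 100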
On the second and third points.
You can set up master-slave replication, but this will not protect you from data errors (for example, if important data is accidentally deleted). Therefore, you need to provide regular backups one way or another.
There are several approaches here:
LVM snapshot - a good solution if your filesystem supports it and journaling is enabled in your MongoDB; see the sketch after this list. For more details see here.
mongodump - you can create a shell script that will run scheduled backups via cron and send them to storage. Here is an example of a good script.
Backup as a Service. There are many solutions to make the backup for you.
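A rough sketch of the LVM snapshot approach (the volume group vg0, the logical volume mongodata holding the dbpath, and the backup paths are all hypothetical):
lvcreate --size 1G --snapshot --name mongo-snap /dev/vg0/mongodata
mkdir -p /mnt/mongo-snap
mount /dev/vg0/mongo-snap /mnt/mongo-snap
tar czf /backup/mongodb-$(date +%F).tar.gz -C /mnt/mongo-snap .
umount /mnt/mongo-snap
lvremove -f /dev/vg0/mongo-snap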

Related

Barman full Backup is not getting triggered

Barman full backup is not getting triggered. I installed Barman 2.3-2 on Ubuntu 18.04. Barman takes incremental backups perfectly, but it is not taking full backups.
The barman backup command shows the output below:
Backup start at LSN:
This is the first backup for server h8
WAL segments preceding the current backup have been found:
from server h8 has been removed
Starting backup copy via rsync/SSH for 20220613T132933 (5 jobs)
When you install barman, you get a cron job in /etc/cron.d/barman that runs barman cron every minute.
That command, however, does not take any backups.
So, it's up to you to add the barman backup server_name --wait command in a cron of your choice (you can actually use the same /etc/cron.d/barman file if you so desire).
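For example, an /etc/cron.d entry (the 02:00 schedule is an assumption; the server name h8 is taken from the output above) that takes a nightly full backup as the barman user:
0 2 * * * barman /usr/bin/barman backup h8 --wait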
There has been discussion about whether barman cron should take backups according to the defined retention policy, but it actually doesn't: it only deletes existing backups according to that policy, it doesn't take them. See here.

Backing up redis at a specific time of day

Is there any way to schedule redis back-ups at a specific time of day (e.g. 3:00 AM GMT) - preferably via a setting in the accompanying conf file?
I already understand that one can set a backup rule in redis configuration (e.g. save every X hours if Y keys have changed).
But how does one schedule such a backup at a particular time of day? I would love to know something basic but effective. In case it matters, my Redis version is 5.0.3.
As far as I know it is currently not possible from inside Redis, but it is achievable using crontab. Here is a short example:
create a backup script file:
/tmp/backup.sh
echo save | redis-cli >> /tmp/redis-backup.log
If using sockets, the above would be:
echo save | redis-cli -s /var/run/redis.sock >> /tmp/redis-backup.log
The socket location in your system may vary.
Next, give execute permission to the script:
chmod +x /tmp/backup.sh
Finally, make an entry in crontab: crontab -e
0 3 * * * /tmp/backup.sh
This will run backup.sh at exactly 3 AM.
In case you want to disable redis saving setup in the conf (without restarting the redis instance), the best way is to log into redis-cli and issue CONFIG SET save "". Double check that it worked via CONFIG GET save. Finally, don't forget to change the save settings in the relevant conf file as well. Lastly, it's wiser to use bgsave instead of save if tackling a redis instance in production.
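For reference, the commands mentioned above (run against a local instance; adjust the host, port or socket as needed):
redis-cli CONFIG SET save ""      # disable automatic RDB snapshots at runtime
redis-cli CONFIG GET save         # verify the change
redis-cli BGSAVE                  # non-blocking background save instead of SAVE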
For more, checkout these links:
How To Back Up and Restore Your Redis Data
Cron Scheduler
How To Start/Stop/Restart Cron Service In Linux

Nexus 3 backup via command line?

In Nexus 3 the backup procedure has changed.
In Nexus 2 the recommendation was to run an OS scheduled task / cron job to rsync some directories to a backup location.
In Nexus 3 the recommended way seems to be to schedule the predefined Nexus task "Export configuration & metadata for backup", and then also create a cron job to back up what that task exports.
Is it still possible in Nexus 3 to do an old-style backup, i.e. shut down the server and back up certain directories, and for a restore just put everything back? Will that work?
Or is there a way to run this task from the command line?
The way this is done in Nexus 3 does not seem to be thought through very well. You need to do a lot more to achieve what could be done with a single cron job in Nexus 2:
Create a scheduled task to export data.
Create a cron job to back up the exported data (see the sketch after this list).
Make sure that the scheduled task runs and finishes before the cron job.
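A hedged sketch of the cron part (the schedule, user and all paths are assumptions, and the export task is assumed to have finished by then):
# /etc/cron.d/nexus-backup (hypothetical)
30 2 * * * nexus /usr/bin/rsync -a /opt/sonatype-work/nexus3/backup/ /mnt/backup/nexus3/db-export/
45 2 * * * nexus /usr/bin/rsync -a /opt/sonatype-work/nexus3/blobs/ /mnt/backup/nexus3/blobs/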
See for example https://help.sonatype.com/display/NXRM3/Restore+Exported+Databases
See also Nexus Repository 3 backup
If you back up the entire data (sonatype-work) directory this should work as you wish. However, since the data directory is large and has many moving parts, it is safer to use the task; otherwise you may get copies of things in motion, which could then be corrupted, and your backup would not work. As far as I know, copying the work directory is only recommended for servers that are shut down, which isn't an option for many bigger companies.
Copying the entire folder did not work for me and resulted in orientdb problems. Last year I started to create N3DR. Version 3.5.0 has just been released.
https://help.sonatype.com/plugins/servlet/mobile?contentId=5412146#content/view/5412146
In case the link becomes bad, here is the content (from Oct 20, 2017):
Nexus Repository stores data in blob stores and keeps some metadata and configuration information separately in databases. You must back up the blob stores and metadata databases together. Your backup strategy should involve backing up both your databases and blob stores together to a new location in order to keep the data intact.
Complete the steps below to perform a backup:
Blob Store Backup
You must back up the filesystem or object store containing the blobs separately from Nexus Repository.
For File blob stores, back up the directory storing the blobs.
For a typical configuration, this will be $data-dir/blobs.
For S3 blob stores, you can use bucket versioning as an alternative to backups. You can also mirror the bucket to another S3 bucket instead.
For cloud-based storage providers (S3, Azure, etc.), refer to their documentation about storage backup options.
Node ID Backup
Each Nexus Repository instance is associated with a distinct ID. You must back up this ID so that blob storage metrics (the size and count of blobs on disk) and Nexus Firewall reports will function in the event of a restore / moving Nexus Repository from one server to another. The files to back up to preserve the node ID are located in the following location (also see Directories):
$data-dir/keystores/node/
To use this backup, place these files in the same location before starting Nexus Repository.
Database Backup
The databases that you export have pointers to blob stores that contain components and assets potentially across multiple repositories. If you don’t back them up together, the component metadata can point to non-existent blob stores. So, your backup strategy should involve backing up both your databases and blob stores together to a new location in order to keep the data intact.
Here’s a common scenario for backing up custom configurations in tandem with the database export task:
Configure the appropriate backup task to export databases:
Use the Admin - Export databases for backup task for OrientDB databases
Use the Admin - Backup H2 Database task for H2 databases (PRO)
Run the task to export the databases to the configured folder.
Back up custom configurations in your installation and data directories at the same time you run the export task.
Back up all blob stores.
Store all backed up configurations and exported data together.
Write access to databases is temporarily suspended until a backup is complete. It’s advised to schedule backup tasks during off-hours.

Undo zfs create

I have a problem. I created a pool consisting of a single volume backed by one 2.5 TB file, just to deal with duplicate files. I copied a folder with photos into it. Some of the photos were not backed up. Just now I see that my pool folder is empty. When I checked with 'sudo zfs list' it said 'No datasets available'.
I thought the pool had become detached, so to attach it I ran all these commands again:
sudo zpool create singlepool -f /home/john/zfsvolumes/zfs_single_volume.dat -m /home/share/zfssinglepool
sudo zfs set dedup=on singlepool
sudo zpool get dedupratio singlepool
sudo zfs set compression=lz4 singlepool
sudo chown -R writer:writer /home/share/zfssinglepool
Now I see an empty pool!
Can I get back the folders that I copied to the pool before I recreated it?
Unfortunately, use of zpool create -f will recreate the pool from scratch even if ZFS recognizes that a pool has already been created using that storage:
-f    Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
This is similar to reformatting a partition with other file systems, which will leave whatever data is there written in place, but still erase the references the file system needs to find the data. You may be able to pay an expert to reconstruct your data, but otherwise I'm afraid the data will be very hard to get back from your pool. As in any data recovery mission, I'd advise making a copy of the data ASAP on some external media that you can use to do the recovery from, in case further attempts at recovery accidentally corrupt the data even worse.
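For example, a raw copy of the backing file to external media before attempting any recovery (the destination path is an assumption):
dd if=/home/john/zfsvolumes/zfs_single_volume.dat of=/mnt/external/zfs_single_volume.img bs=1M status=progress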

Restore Redis dump to a different database

How can I dump a Redis instance that is running on database 0 and restore it on my local machine into a different database (8)?
I already secure copied the dump file:
scp hostname:/var/lib/redis/dump.rdb .
But if I replace my local redis dump.rdb with this one, I'll get the data in database 0. How can I restore it into a specific database?
Firstly note that the use of numbered/shared Redis databases is inadvisable. You really should consider using dedicated Redis servers with a single DB (0) on them (more info at: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances)
Redis does not offer a straightforward way to do this, but there are two basic ways one could go about it:
Pre-processing: modify the dump.rdb file to load into your database of choice. You could build a tool for that or perhaps use one of the existing ones. Jan-Erik has done an outstanding job of documenting the RDB v7 format at http://rdb.fnordig.de/file_format.html so all you need to do is basically change the Database Selector byte.
Post-restore: use the MOVE command on the output of SCANning your restored database - this should be easily scriptable (see the sketch below).
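A minimal sketch of the post-restore approach (assuming the dump was loaded into local database 0 and the target is database 8):
redis-cli --scan | while read -r key; do
  redis-cli MOVE "$key" 8
done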
I ended up creating a script in Ruby to dump and restore the keys I wanted. (Please note that this approach is slow; it takes around 1 minute for 200 keys.)
Get the keys to dump / restore:
ssh hostname redis-cli --scan --pattern 'awesome_filter_pattern*'
Open an SSH connection to the production server and dump each remote key:
dump = ssh.exec!("redis-cli dump #{key}").chomp
Restore it on localhost:
$redis.connection.restore(key, 0, dump)