cPanel - Does restoring from a cPanel backup delete other cPanel backups made at a later date than the one you're restoring from?

I plan on restoring from a cPanel backup using the cPanel Backup Wizard, but I would like to know: will restoring from a backup this way delete any other cPanel backups that were made at a later date than the one I'm restoring from?
Specifically, I have backups generated on 10/1 and 10/5 (earlier today). If I restore from the backup on 10/1, will it delete the backup I created on 10/5?
Additional Details
I'm on a Bluehost VPS server running CentOS 6.7.

It depends on the path to which you are restoring the backup. If you restore the 10/1 backup to the same path where your second copy (10/5) is stored, then it will overwrite that copy.
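One way to check for yourself is to list the backup archives before and after the restore. cPanel's Backup Wizard typically writes full backups as date-stamped tarballs in the account's home directory, and restoring from one archive does not touch the others unless a file is overwritten. A minimal sketch (the `backup-*.tar.gz` naming and `BACKUP_DIR` default are assumptions; confirm the location on your own server):

```shell
# List existing cPanel full-backup archives before restoring.
# BACKUP_DIR defaults to the account's home directory; the
# backup-*.tar.gz pattern is the usual cPanel naming convention.
BACKUP_DIR="${BACKUP_DIR:-$HOME}"
ls -lh "$BACKUP_DIR"/backup-*.tar.gz 2>/dev/null || echo "no backup archives found"
```

Running the same listing after the restore confirms whether the 10/5 archive survived.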

Related

Barman full Backup is not getting triggered

Barman full backup is not getting triggered. I installed Barman 2.3-2 on Ubuntu 18.04. Barman is taking incremental backups perfectly, but it is not taking full backups.
The barman backup command shows the output below:
Backup start at LSN:
This is the first backup for server h8
WAL segments preceding the current backup have been found:
from server h8 has been removed
Starting backup copy via rsync/SSH for 20220613T132933 (5 jobs)
When you install barman, you get a cron job in /etc/cron.d/barman that runs barman cron every minute.
That command, however, does not take any backups.
So, it's up to you to add the barman backup server_name --wait command to a cron job of your choice (you can actually use the same /etc/cron.d/barman file if you so desire).
There has been discussion that barman cron should take backups according to the defined retention policy, but it actually doesn't: it only deletes existing backups according to that policy, it doesn't take them. See here.
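A cron entry for the missing full backup could look like this (the server name h8 and the Saturday 03:30 schedule are assumptions; adjust to your setup — the first line is roughly what the barman package already ships):

```
# /etc/cron.d/barman - the stock entry runs maintenance only;
# the second line (added by you) actually takes the backup.
* * * * * barman [ -x /usr/bin/barman ] && /usr/bin/barman cron
30 3 * * 6 barman /usr/bin/barman backup h8 --wait
```

Note the extra user field (barman) in /etc/cron.d files, unlike a per-user crontab.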

Copy backup file from primary server to secondary server

I have a primary SQL Server where a backup job runs every night and the backup is stored on a local drive. I want to copy that backup to my fail-over SQL Server.
Options I tried:
I created a backup job to save that backup to my fail-over server's drive, but I get the error "access is denied for the \xxx.xx.xxx\D$\backup path".
Is there a script to copy the backup and save it on another server's drive?
Please see the screenshot
I have seen some posts relating to this topic, but none of them has an accepted answer.
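The "access is denied" error on an administrative share (D$) usually means the account running the SQL Server Agent job has no rights on the remote server. Once a dedicated share with write permission for that account exists on the fail-over server, the copy step can be a single command (server name, share, and paths below are placeholders, not taken from the original post):

```
:: Hypothetical scheduled-copy step for the SQL Agent job (cmd.exe):
:: restartable mode, no progress spam, 3 retries with 10 s waits.
robocopy "D:\Backup" "\\failover\SQLBackup" *.bak /Z /NP /R:3 /W:10
```

robocopy returns non-zero exit codes even on success (1 = files copied), which is worth accounting for in a job step.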

Restore database from backup drupal 8

I accidentally deleted my Drupal 8 files, but my database is not damaged. Is there any way I can bring my site back to life?
What I did: I have an old backup of my site [files only]. I copied those backup files to the root folder and replaced the database credentials in settings.php, but no luck.
Is there any other way to bring my site back to life from the database?
If you have drush, you can try:
drush sql-cli < my_directory/my_drupal8_dump.sql
So, with your backup files in place, just run this command; there is no need to update settings.php.
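If drush is not available on the host, the same dump can be imported with the stock mysql client instead (the user, password, database name, and dump path below are placeholders; use the values from your settings.php):

```
# Import the SQL dump directly; credentials must match settings.php.
mysql -u drupal_user -p drupal_db < my_directory/my_drupal8_dump.sql
```

Either way, clear Drupal's caches afterwards so the restored site picks up the imported data.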

WHM backup and transfer

I have a website on cPanel (WHM). I want to back up the whole site and transfer it to another stand-alone server of my own. Is that feasible? How do I do it?
If the new server doesn't have cPanel, I don't think there is a way to recreate the account automatically, so just run /scripts/pkgacct user (if you have root on the cPanel server), or use cPanel -> Backups -> Download or Generate a Full Website Backup.
Copy the archive to the new server and start recreating the account, databases, etc. manually.
The archive contains all the information you need: DNS zones, MySQL databases, homedir.tar (all the files from /home/user), and so on.
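On the command line, the package-and-copy step looks roughly like this (the account name and destination host are examples; the cpmove file name follows cPanel's usual pattern, but verify the exact path pkgacct prints):

```
# On the cPanel server, as root: package the whole account.
/scripts/pkgacct exampleuser
# pkgacct writes an archive such as /home/cpmove-exampleuser.tar.gz;
# copy it to the new server for manual extraction.
scp /home/cpmove-exampleuser.tar.gz root@newserver.example.com:/root/
```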
Yes! It's easy!
Log in to WHM.
In the search box on the left, type Backup and click Backup Configuration (or go to Backup >> Backup Configuration).
Set Backup Status to Enabled.
Check the options as per your requirements; at the end of the page, under Additional Destinations, there is a drop-down or checkbox for an FTP server. Click it.
Enter the FTP details of the other server where you want to store all of your backups.
Hope it helps!

MongoDB backup plan

I want to switch from MySQL to MongoDB, but large data loss (more than one hour) is not acceptable to me.
I need to have three backup plans:
Hourly backup plan. Data is flushed to disk every X minutes, and if something goes wrong with the server, I want to be sure that after a reboot it has all data up to at most an hour ago. Can I configure this?
Daily backup plan. Data is synced to a backup disk every day, so even if the server explodes I can recover yesterday's data within a few hours. Should I use fsync, master-slave replication, or something else? I would like minimal traffic, so ideally only changes are sent.
Weekly backup plan. Data is synced to a second backup disk, so if both the server and the first backup disk explode, I still have the data from last week. Here it is a question of reliability, so it's OK to send all data over the network.
How can I do it?
The fsync command flushes the data to disk. Flushing happens every 60 seconds by default, but this can be configured using the --syncdelay command-line parameter.
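For instance, to flush every 30 seconds instead of the default 60 (the --syncdelay flag is documented MongoDB behavior; the data path is illustrative):

```
mongod --dbpath /data/db --syncdelay 30
```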
The documentation on backups has some good pointers for daily and weekly backups. For the daily backup, a master-slave configuration seems like the best option, as it will only sync changes.
For the weekly backup you can also use a master-slave configuration, or replication. Another option is the mongodump utility, which will back-up the entire database. It is capable of creating backups while the database is running, so you can run it on the main database or one of the slaves. You can also lock the slave before backing it up.
Try this backup script if you want to create a backup from a slave MongoDB database to S3:
# DB host (secondary preferred, to avoid impacting primary performance)
HOST='SomeHost/mongodbtest-slave'
# DB name
DBNAME=***
# S3 bucket name
BUCKET=*-backup
# Linux user account
USER=ubuntu
# Current time
TIME=$(/bin/date +%d-%m-%Y-%T)
# Password (quoted, since it contains shell-special characters)
PASSWORD='somePassword#!2*1'
# Username
USERNAME=someUsername
# Backup directory
DEST=/home/ubuntu/tmp
# Tar file of backup directory
TAR=$DEST/../$TIME.tar
# Create backup dir (-p to avoid warning if it already exists)
/bin/mkdir -p $DEST
# Log
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME"
# Dump from the mongodb host into the backup directory
mongodump --port 27017 -d $DBNAME -u $USERNAME -p $PASSWORD -o $DEST
# Create tar of the backup directory
/bin/tar cvf $TAR -C $DEST .
# Upload tar to S3
/usr/bin/aws s3 cp $TAR s3://$BUCKET/
# Remove tar file locally
/bin/rm -f $TAR
# Remove backup directory
/bin/rm -rf $DEST
# All done
echo "Backup available at https://s3.amazonaws.com/$BUCKET/$TIME.tar"
You can put the steps above into an executable shell file and run it at any interval using crontab.
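For example, assuming the script above is saved as /home/ubuntu/mongo_s3_backup.sh (the name is illustrative) and made executable, a nightly run at 02:00 would be a single crontab line:

```
# crontab -e (as the ubuntu user); append stdout/stderr to a log file.
0 2 * * * /home/ubuntu/mongo_s3_backup.sh >> /var/log/mongo_s3_backup.log 2>&1
```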
If you want to outsource the backup solution entirely, MongoDB Management Service takes snapshots every six hours. The default retention policy on the snapshots will allow you to get point-in-time restore for 24 hours, daily snapshots for a week, weekly snapshots for a month, and monthly snapshots for a year.
This FAQ has the full retention policy.
The backup service continually backs up your replica set by reading the oplog so the overhead is lower than full local periodic snapshots.
Maybe you can use automongobackup.
On the first point.
MongoDB has a notion of a 'durable write operation'. If journaling is enabled, you may only lose data that has not yet been written to the journal. This is a very small window of time (100 milliseconds by default).
On the second and third points.
You can set up master-slave replication, but this will not protect you from data errors (for example, if important data is accidentally deleted). Therefore, you still need to take regular backups one way or another.
There are several approaches here:
An LVM snapshot is a good solution if your filesystem supports it and journaling is enabled in your MongoDB. For more details, see here.
mongodump - you can create a shell script that runs scheduled backups via cron and sends them to storage. Here is an example of a good script.
Backup as a service. There are many solutions that will handle the backup for you.