Corrupting a database backup to fail CHECKSUM - sql

I'm currently setting up monitoring for some SQL Servers running on Windows, and I want to test whether it picks up any errors that occur when backups fail. I'm using the CHECKSUM option for backup validation.
Is it possible to corrupt the backup in a way that makes the CHECKSUM validation fail?

CHECKSUM verifies that the file is not corrupt within the same BACKUP or RESTORE command; it can also be set at the database level so that a self-check always runs. I don't know of a way to intentionally cause this to fail during a BACKUP command, unless you're corrupting the file at the I/O level outside of SQL Server.
RESTORE VERIFYONLY verifies that the file can be restored completely; this can be performed any time after the backup has completed. Essentially, it fake-attempts a full restore to ensure the file is a legit backup. More info here.
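To actually test the monitoring, here is a minimal sketch along those lines (the database name and path are placeholders, not from the original question): take the backup WITH CHECKSUM, corrupt the .bak outside SQL Server, for example by flipping a byte in the middle of the file with a hex editor, and then verify it.

-- Take the backup with page checksums embedded in the backup file.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDB_checksum_test.bak'
WITH CHECKSUM, INIT;

-- After corrupting the .bak outside SQL Server, this should fail and
-- report that damage to the backup set was detected, which is what the
-- monitoring needs to pick up.
RESTORE VERIFYONLY
FROM DISK = 'D:\Backups\MyDB_checksum_test.bak'
WITH CHECKSUM;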

Related

"BACKUP DATABASE" to shared location throws an error

I'm writing a SQL Server stored procedure to back up a database to a network shared location.
The command is as follows (the whole D drive has been shared):
BACKUP DATABASE MyDB
TO DISK = '\\192.168.1.50\d\Backups\MyDb_20200615-09.54.08.BAK'
The command works fine for a local path, but for the shared path it throws the following error:
Operating system error 1909(The referenced account is currently locked out and may not be logged on to.).
How to get rid of this error?
Thanks in advance.
Yeah, longstanding issue: UNC paths are a big pain with SQL Server commands, and often not usable at all. Two possibilities:
Drop the backup onto a local disk and then copy it to your network path.
Map the drive. Note that there are significant and painful access issues because most SQL Server instances run as local SYSTEM and won't have the ability to access network drives.
Edit: the permissions issue is the reason you're getting locked out. The SYSTEM account credentials won't work on other machines. You need to create an account with matching credentials on both machines and run the SQL Server instance as that account. This can have other implications. It's easier (and possibly safer) to drop the backup to local disk and copy it with script credentials.
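A minimal sketch of the first option (paths and names are placeholders): back up locally under the service account, then copy the file to the share under credentials that can reach it.

-- Back up to a local disk that the SQL Server service account can always write to.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDb_local.BAK'
WITH INIT;

-- Then copy the file to the share from a scheduled task or script running
-- under an account that has access to the share, e.g.:
--   copy D:\Backups\MyDb_local.BAK \\192.168.1.50\d\Backups\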

How to know if a file is completely written to disk in Windows (with disk Write Cache policy disabled)

I am trying to take a backup of a SQL database log to a disk and then take a snapshot of the disk. To ensure the write bypasses the disk write cache, I have disabled the write cache by enabling the Quick Removal policy, so the file should be written directly to the disk. The transaction log backup file shows up at the location, but it does not show up in the disk's snapshot, because the file had not yet been written completely to the disk. The OS thinks the file is accessible: I tried opening it via PowerShell after the backup completed (and before the snapshot), and it was allowed (otherwise it should have said the file is being used by another process). Is there a method to find out whether a file has been completely written to the disk?
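For reference, a minimal sketch of the kind of log backup being described (the database name and path are placeholders); one way to sanity-check the file from SQL Server's side before taking the snapshot is to verify it in the same script.

-- Take the log backup; when this statement returns successfully,
-- SQL Server has finished writing and closed the backup file.
BACKUP LOG MyDB
TO DISK = 'E:\LogBackups\MyDB_log.trn'
WITH CHECKSUM, INIT;

-- Verify the file on disk before triggering the snapshot; whether the
-- volume snapshot then includes it is a storage/timing question outside SQL Server.
RESTORE VERIFYONLY
FROM DISK = 'E:\LogBackups\MyDB_log.trn'
WITH CHECKSUM;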

OpenERP during backup from interface says access denied

I restored the database from a backup, renamed it to the original name, and deleted the faulty one. The system is working, but the backup from the interface says access denied.
I checked the processes, and the process running pg_dump stalls in zombie mode.
Any idea?

SQL Server database backup (.bak) file gets corrupted

I have been working on SQL Server database design for a long time now, and I have observed that when a .bak file is emailed, or uploaded to and downloaded from an FTP site, it gets corrupted.
When I try to restore, it gives me error code 3013 with the message:
"Backup or restore operation terminating abnormally."
I tried RESTORE VERIFYONLY FROM DISK='C:\abc.bak' as well, but it says
VERIFY DATABASE is terminating abnormally.
Any idea why this is happening? Also, is there a better way to move the database from one server to another? (I do not have access to the source server.)
Thanks in advance.
For FTP, make sure that you use binary mode.
Did you try sending the file to yourself as a plain attachment and comparing the sent and received copies?
As mentioned here, the issue might be caused by corrupted transaction logs. You may need to repair your .mdf file first using a repair tool; there are a lot of tools available on the market, such as one called Restore MS SQL Database.
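If whoever produces the backup can take it WITH CHECKSUM, the transferred copy can be checked more strictly at the destination (the database name and paths below are placeholders based on the file name in the question):

-- On the source side, if possible: embed page checksums in the backup.
BACKUP DATABASE abc TO DISK = 'C:\abc.bak' WITH CHECKSUM;

-- On the receiving side: quick check that the header is readable at all.
RESTORE HEADERONLY FROM DISK = 'C:\abc.bak';

-- Full verification, recomputing the checksums over the whole file;
-- if this fails only on the received copy, the corruption happened in transit.
RESTORE VERIFYONLY FROM DISK = 'C:\abc.bak' WITH CHECKSUM;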

MongoDB backup plan

I want to switch from MySQL to MongoDB, but large data losses (more than one hour) are not acceptable to me.
I need to have 3 backup plans:
Hourly backup plan. Data is flushed to disk every X minutes, and if something goes wrong with the server I want to be sure that after a reboot it will have all data from at most an hour ago. Can I configure this?
Daily backup plan. Data is synced to a backup disk every day, so even if the server explodes I can recover yesterday's data within a few hours. Should I use fsync, master-slave replication, or something else? I would like minimal traffic, so ideally only changes will be sent.
Weekly backup plan. Data is synced to a second backup disk, so if both the server and the first backup disk explode I still have at least last week's data. Here it is a question of reliability, so it's OK to send all data over the network.
How can I do it?
The fsync command flushes the data to disk. It is executed every 60 seconds by default, but this can be configured using the --syncdelay command line parameter.
The documentation on backups has some good pointers for daily and weekly backups. For the daily backup, a master-slave configuration seems like the best option, as it will only sync changes.
For the weekly backup you can also use a master-slave configuration, or replication. Another option is the mongodump utility, which will back up the entire database. It is capable of creating backups while the database is running, so you can run it on the main database or one of the slaves. You can also lock the slave before backing it up.
Try this backup script if you want to create a backup from a slave MongoDB database to S3.
#!/bin/bash

# DB host (secondary preferred so as to avoid impacting primary performance)
HOST='SomeHost/mongodbtest-slave'

# DB name
DBNAME=***

# S3 bucket name
BUCKET=*-backup

# Linux user account
USER=ubuntu

# Current time
TIME=$(/bin/date +%d-%m-%Y-%T)

# Password (quoted so the # and ! are not interpreted by the shell)
PASSWORD='somePassword#!2*1'

# Username
USERNAME=someUsername

# Backup directory
DEST=/home/ubuntu/tmp

# Tar file of backup directory
TAR=$DEST/../$TIME.tar

# Create backup dir (-p to avoid warning if already exists)
/bin/mkdir -p $DEST

# Log
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME";

# Dump from mongodb host into backup directory
mongodump --port 27017 -d $DBNAME -u $USERNAME -p $PASSWORD -o $DEST

# Create tar of backup directory
/bin/tar cvf $TAR -C $DEST .

# Upload tar to s3
/usr/bin/aws s3 cp $TAR s3://$BUCKET/

# Remove tar file locally
/bin/rm -f $TAR

# Remove backup directory
/bin/rm -rf $DEST

# All done
echo "Backup available at https://s3.amazonaws.com/$BUCKET/$TIME.tar"
You can take the steps above, put them in an executable shell file, and run it at any interval using crontab.
If you want to outsource the backup solution entirely, MongoDB Management Service takes snapshots every six hours. The default retention policy on the snapshots will allow you to get point-in-time restore for 24 hours, daily snapshots for a week, weekly snapshots for a month, and monthly snapshots for a year.
This FAQ has the full retention policy.
The backup service continually backs up your replica set by reading the oplog, so the overhead is lower than with full local periodic snapshots.
Maybe you can use automongobackup.
On the first point:
MongoDB has the notion of a 'durable write operation'. If journaling is enabled, you can only lose data that has not yet been written to the journal. This is a very small window of time (100 milliseconds by default).
On the second and third points:
You can set up master-slave replication, but this will not protect you from data errors (for example, if important data is accidentally deleted). Therefore, you still need to provide regular backups one way or another.
There are several approaches here:
LVM snapshots are a good solution if your filesystem supports them and journaling is enabled in your MongoDB. For more details see here.
mongodump - you can create a shell script that will run scheduled backups via cron and send them to storage. Here is an example of a good script.
Backup as a Service - there are many solutions that will make the backup for you.