I have a VPS with OVH. There are two options in the control panel, Automated Backup and Snapshot. What is the difference between the two, and which one should I enable so I don't lose the data and the configuration on the server? It took me quite some time to optimize my server, so I don't want to go through that pain again. Plus there's around 30GB of uploaded data that I don't want to risk either.
This explains it: https://www.ovh.com/world/vps/backup-vps.xml
So basically the automated backup is done automatically every day and replicated across three different sites to ensure nothing is lost.
With snapshots, it seems you get a maximum of two, and you have to take them yourself (like a VM snapshot).
We're trying to migrate to Azure SQL and have built a prod and a test SQL server (using Azure DevOps, Bicep and PowerShell). We have a requirement for a manual process in an Azure DevOps pipeline (it needs to be manual because we need a steady state in test when getting ready for a release) to copy the prod databases over the top of the test ones whenever we need to refresh the data. As the prod databases may not be consistent during the day, when this is triggered the database we want to restore is the state as at 4am that morning.
We originally attempted this with a nightly pipeline that ran New-AzSqlDatabaseCopy to copy the prod databases to serverless holding copies on the test server (I couldn't use the elastic pool the test databases sit in, as it's at the limit of the number of databases it can hold). We could then drop a test database and run a CREATE DATABASE ... AS COPY OF to recreate it from the holding copy as needed. This performed really nicely but resulted in us running up a massive bill (think six times the bill for the whole company). We're still trying to understand why with the support team, but I suspect it's down to the interplay between the retention period of deleted Azure databases and us doing a delete and restore every night.
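For illustration, the nightly step looked roughly like the sketch below (the resource group, server, database and pool names are placeholders rather than our real ones, the serverless service objective is just an example, and it assumes an authenticated Az PowerShell session plus the Az.Sql and SqlServer modules):

# Nightly: copy prod to a serverless holding copy on the test server (placeholder names)
$rg         = "rg-sql"
$prodServer = "sql-prod"
$testServer = "sql-test"
$dbName     = "AppDb"

New-AzSqlDatabaseCopy -ResourceGroupName $rg -ServerName $prodServer -DatabaseName $dbName `
    -CopyResourceGroupName $rg -CopyServerName $testServer -CopyDatabaseName "${dbName}_copy" `
    -ServiceObjectiveName "GP_S_Gen5_1"   # example serverless SKU, so the copy sits outside the test elastic pool

# On demand: drop the test database and recreate it from the holding copy, back inside the test pool
Remove-AzSqlDatabase -ResourceGroupName $rg -ServerName $testServer -DatabaseName $dbName -Force
Invoke-Sqlcmd -ServerInstance "$testServer.database.windows.net" -Database "master" `
    -Username "deploy_user" -Password $env:SQL_PASSWORD `
    -Query "CREATE DATABASE [$dbName] AS COPY OF [${dbName}_copy] (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [test-pool]));"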
Ideally, I'd like to do a point-in-time restore of the prod database over the top of the existing database on the test server, but combinations of New-AzSqlDatabaseCopy and Restore-AzSqlDatabase don't seem to get me there. I'd also need to be sure that this approach wouldn't slow down the prod databases, wouldn't cost an excessive amount, and would be reasonably performant.
I’d be comfortable with detaching the backup from the restore, and running the backup step early every morning as a fallback, again as long as it didn’t cost an excessive amount.
In terms of speed, I'm not too fussed about how long the backup step takes as long as it's detached from the restore, but ideally the restore step needs to be as efficient as possible, as it puts our test instance out of action for as long as it runs.
Has anyone got to such a solution that works effectively and efficiently? Any help gratefully received!
Sort of is the honest answer! We never worked out a way of doing it across two servers and Microsoft support ended up saying they didn't think it was feasible, but we got to a nice compromise.
We created a single server for both sets of databases, but placed them in two elastic pools. As the server is just a logical arrangement, and the thing we wanted to protect against was overwhelming the compute, the elastic pools ring-fenced the live compute nicely.
We could then do point-in-time restores from live into test using PowerShell, restoring live as it was last night without needing a separate backup step. This approach does mean that secrets are shared between the two, but it covered off our needs well.
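For anyone following along, the restore step ends up being only a few lines. A rough sketch (all names are placeholders, and it assumes an authenticated Az PowerShell session with the Az.Sql module):

# Point-in-time restore of live into the test pool (placeholder names)
$rg       = "rg-sql"
$server   = "sql-shared"        # the single logical server hosting both elastic pools
$liveDb   = "AppDb"
$testDb   = "AppDb_test"
$testPool = "pool-test"

$pointInTime = (Get-Date).Date.AddHours(4)   # live as it was at 4am this morning
$source = Get-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName $liveDb

# Drop the current test database, then restore the live backup in its place, inside the test pool
Remove-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName $testDb -Force
Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime $pointInTime `
    -ResourceGroupName $rg -ServerName $server -ResourceId $source.ResourceId `
    -TargetDatabaseName $testDb -ElasticPoolName $testPool

Restoring with -ElasticPoolName keeps the restored copy on the test pool's compute, which is what ring-fences live.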
I have data on several machines that I want to back up in a way that lets me restore to certain points in time.
From what I read, Snapshot Replication achieves this (as opposed to a backup that clobbers previous results).
The main motivation is that if the data files are ransacked and encrypted, then with a simple overwrite-style backup I can end up in a state where the backed-up files are also encrypted.
One way to do this is by using two Synology NAS machines, where I can have:
rsync processes to back up files from multiple machines onto NAS1
Snapshot Replication applied from NAS1 to NAS2
This way, if the data is hijacked at a certain point, I can restore it to the last good state by restoring NAS2 to a previous point in time.
I would like to know if:
Snapshot Replication is the way to go, or there are other solutions?
are there other ways to achieve Snapshot Replication, e.g. with a single NAS?
I have an older Synology 2-bay NAS, a DS213j.
Assuming that I buy a second, newer NAS (e.g. a DS220j), are the two NAS machines expected to work together?
Thanks
I found out that Hyper Backup can keep multiple versions over time, so I'm using it instead of Snapshot Replication.
I've got two SQL Servers; one of them (Server A) is backing up transaction logs for a database and uploading them to the other (Server B). Unfortunately I have no access to Server A; I simply have to trust that it is doing its job of periodically uploading its transaction log backups to Server B.
Now, suppose Server B needs to recover the database for whatever reason. Doing this will break its ability to apply further transaction log backups.
Is there any way to copy/branch/backup the restoring database, so I can have one version of it that will continue to apply the transaction logs, and one version that will be recovered for reading/writing?
Unfortunately you can't use a database snapshot to bring a log-shipping secondary online. You might be able to do it if the data resides on a SAN where you can force a fast LUN copy and then mount a second copy of it quickly. Even without a SAN you can, between log loads or while you let them stack up for a bit, take the database offline, copy the files, and then bring up the copied version. Ugly, but it gets the job done.
If you can get both DBs involved up to SQL Server 2012, then I'd recommend you read up on AlwaysOn Availability Groups: http://technet.microsoft.com/en-us/library/hh510230.aspx They are cool because you can leave the second copy online in read-only mode, all the time, while it is mirroring. Thus the stupid, almost repetitive name for what should have been called something simple like "Live Mirroring".
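If you do go down the AG route, the piece that makes the second copy readable is the replica's secondary-role connection mode. A very rough sketch with the SqlServer PowerShell module (server, domain and database names are placeholders, and it assumes HADR is already enabled and a mirroring endpoint on port 5022 already exists on both instances):

Import-Module SqlServer

# Replica definitions (placeholder names); -Version 11 means SQL Server 2012
$primary = New-SqlAvailabilityReplica -Name "SERVERA" -EndpointUrl "TCP://servera.domain.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic `
    -ConnectionModeInSecondaryRole AllowAllConnections -AsTemplate -Version 11

$secondary = New-SqlAvailabilityReplica -Name "SERVERB" -EndpointUrl "TCP://serverb.domain.local:5022" `
    -AvailabilityMode SynchronousCommit -FailoverMode Automatic `
    -ConnectionModeInSecondaryRole AllowAllConnections -AsTemplate -Version 11

# Create the group on the primary instance
New-SqlAvailabilityGroup -Name "LiveMirrorAG" -Path "SQLSERVER:\SQL\SERVERA\DEFAULT" `
    -AvailabilityReplica @($primary, $secondary) -Database "MyDatabase"

After that you still have to restore the database on the secondary WITH NORECOVERY, then run Join-SqlAvailabilityGroup and Add-SqlAvailabilityDatabase there; ConnectionModeInSecondaryRole AllowAllConnections is what keeps that copy readable.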
Also, questions like this might better be asked on one of the sister sites like http://ServerFault.com or https://dba.stackexchange.com/
OK, so for standard, non-mirrored databases, the transaction log is kept in check either by having the database in simple recovery mode or by taking regular log backups. We keep ours in simple, as we have SAN snapshot backups taking place and there is no need for SQL backups.
We're now moving to mirroring. I obviously no longer have the choice of simple mode and must use full recovery, which leads to large log files and the need for log backups. That's fine, I can deal with that: a maintenance plan that takes a log backup and discards any previous ones. I realise that such a backup is essentially useless without its predecessors, but the SAN snapshots are doing the real backups.
My question is...
a) Is there a way to truncate the already-processed portion of the log file without creating a backup? (As I can't use the backups anyway...)
b) A maintenance plan is local to a server and is not replicated across a mirrored pair. How should this be done on a mirrored setup, so that when the database fails over the plan starts running on the new principal, but doesn't get upset while it's the mirror?
Thanks
A. If your server is important enough to mirror, why isn't it important enough to take transaction log backups? SAN snapshots are images of a single point in time; they don't give you the ability to stop at different points along the way. When your developers truncate a table, you want to replay the logs right up until that statement, and stop there. That's what transaction log backups are good for.
B. Set up a maintenance plan (or even better, T-SQL scripts like Ola Hallengren's at http://ola.hallengren.com) to back up all of the databases, but check the boxes to only back up the online ones. (Off the top of my head, not sure if that's an option in 2005 - might be 2008 only.) That way, you'll always get whatever ones happen to fail over.
Of course, keep in mind that you need to be careful with things like cleanup scripts and copying those backup files. If you have half of your t-log backups on one share and half on the other, it's tougher to restore.
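If you'd rather script it than click through a maintenance plan, here's a minimal sketch of that "only the online ones" idea using the SqlServer PowerShell module (the instance name and backup share are placeholders):

Import-Module SqlServer

$instance  = "SQLPROD01"                 # placeholder instance name
$backupDir = "\\backupshare\sql\logs"    # placeholder UNC path

# Find every user database that is currently online and in full recovery
$dbs = Invoke-Sqlcmd -ServerInstance $instance -Query @"
SELECT name
FROM   sys.databases
WHERE  state_desc = 'ONLINE'
  AND  recovery_model_desc = 'FULL'
  AND  database_id > 4;
"@

# Take a log backup of each one
foreach ($db in $dbs) {
    $stamp = Get-Date -Format "yyyyMMdd_HHmmss"
    Backup-SqlDatabase -ServerInstance $instance -Database $db.name `
        -BackupAction Log -BackupFile "$backupDir\$($db.name)_$stamp.trn"
}

Mirror copies get skipped because they sit in a RESTORING state rather than ONLINE, so whichever side currently holds the principal is the one that gets backed up.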
a) No, you cannot truncate a log that is part of a mirrored database; backing the logs up is your best option. I have several databases that are set up with mirroring simply based on the HA needs, but DR is not required for various reasons. That seems to be your situation? I would still recommend keeping the log backups for a period of time. No reason to kill a perfectly good recovery plan that is aided by your HA strategy. :)
b) My own solution for this is to have a secondary agent job that monitors the status of the mirror. If the mirror state is found to change, the secondary job on the mirror instance is enabled and, if possible, the job on the old principal is disabled. If the principal was down and comes back up, its job stays disabled; the only way the jobs themselves get switched back is in the event of another failover.
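To illustrate the idea (not our exact jobs; the instance, database and job names below are placeholders), the check that the watcher job runs boils down to something like this:

Import-Module SqlServer

$instance  = "SQLMIRROR01"           # placeholder: the instance this watcher runs on
$database  = "MyDatabase"
$backupJob = "LogBackup - MyDatabase"

# What role does this instance currently hold for the mirrored database?
$row = Invoke-Sqlcmd -ServerInstance $instance -Query @"
SELECT m.mirroring_role_desc
FROM   sys.database_mirroring AS m
JOIN   sys.databases AS d ON d.database_id = m.database_id
WHERE  d.name = N'$database';
"@
$role = $row.mirroring_role_desc

# Enable the log-backup job only while this instance is the principal
$enabled = if ($role -eq 'PRINCIPAL') { 1 } else { 0 }
Invoke-Sqlcmd -ServerInstance $instance -Database "msdb" `
    -Query "EXEC dbo.sp_update_job @job_name = N'$backupJob', @enabled = $enabled;"

Schedule something like that on both instances and the log-backup job only ever runs on whichever side currently holds the principal role.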
I'm doing a webapp and need a backup plan. Here's what I've got so far:
nightly encrypted backup of the SQL database to Amazon S3 and my external drive (incremental if possible; I'm not overly familiar with PostgreSQL yet, but that's another thread)
nightly backup of my Mercurial repo (which includes Apache configs, deploy scripts, etc) to S3 (w/ local backups via Time Machine)
Should I add anything else, or will this cover it? For a gauge of how critical the data is/would be, it's a project management app along the lines of Basecamp.
Perhaps a weekly full backup of your database as well as the nightly incremental ones?
It means that if one of your old incremental backups gets corrupted then you have lost less than a week of data.
Also, ensure you have a backup test plan to ensure your backups work. There are a lot of horror stories going around about this, from companies that have been doing backups for years, never testing them and then finding out none of them are any good once they need them. (I've also been at a company like this. Thankfully I spotted the backups weren't working before they were required and fixed the problems).
One of the best strategies that worked for me in the past was to make the "backup" process the same as the install process, i.e. we fully scripted the server configuration, application creation, database setup and so on in Linux, so an install would look like:
./install.sh [server] [application name]
and the backup/recovery would look like:
./install.sh [server] [application name] -database [database backup file]
In terms of backups, the database (MySQL) was backed up in full by a cron job.
This pretty much ensured that the recovery was tested every time a new instance was deployed, and the scripts also ended up being used to move instances when hardware needed replacement, or when a given server was getting too much load from a customer.
This was the setup for a SaaS enterprise application that I worked on a few years back, so we had full control of the servers.
I would, if you can, change from incremental backups to differential ones. With incrementals, you have to apply the weekly full backup and then every incremental following it; if one of your incrementals early in the week is bad, all the subsequent backups are useless too.
With a differential, however, each backup contains all the changes since the last full backup, so even if one of the backups earlier in the week failed, you would still be able to recover fully as long as you have a successful recent one.
I hope I am explaining this well!
:)