I am trying to move my DB to Azure Managed Instance.
I would like to know what I should do about this.
I have the options of DB migration and DB restore to Azure managed instance.
Please let me know the difference between these 2 methods.
Normally you first do a full backup and then migrate the database.
If something goes wrong, you restore the database.
But I don't know exactly how Azure MI works.
For the database restore option: you take a backup of your database and then restore that backup in the cloud, as long as your application(s) or service(s) can handle a moderate amount of downtime, or you simply do not care about downtime. After the restore you just need to redirect the database connections of your application(s) or service(s).
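If you go the restore route, a managed instance can restore a native .bak directly from Azure Blob Storage. A minimal sketch, assuming a storage account, container and SAS token of your own (all names below are placeholders):

-- The credential name must be the container URL; the secret is the SAS token without the leading '?'
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas-token>';

-- Native restore of the on-premises backup onto the managed instance
RESTORE DATABASE [MyDb]
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDb.bak';

Once the restore finishes, you point your connection strings at the managed instance as described above.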
Database migration is a little more complex. Generally this option is used for database systems where you cannot afford downtime. This option also always starts with a database restore (or initial load). After the restore or initial load you just need to keep the cloud copy of the database up to date, so the transaction logs generated on the source database system are continuously applied to the target database system until you decide to switch over to the cloud database.
PS: Of course, data volume is another critical aspect of database migration to the cloud.
Related
I have Azure SQL database. Before updating it (schema) as part of a dev ops pipeline, I'd like to take a snapshot so should the worst happen I can roll back.
Imagine my dismay when I discovered that snapshots aren't available for Azure SQL database.
What's the best practice here?
The moral equivalent you can do in SQL Azure is:
start an active geo-replication (geo-DR) secondary in the same region
let it seed and catch up
break the replication at the point to which you want the option to roll back in case of problems in your dev ops change.
This avoids the time to do a PITR restore (but you pay for the extra database for the duration it is alive).
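A minimal T-SQL sketch of that sequence, run against the logical server's master database (the server and database names here are made up; the portal or PowerShell can do the same thing):

-- On the primary server: start an active geo-replication secondary in the same region
ALTER DATABASE [MyAppDb] ADD SECONDARY ON SERVER [myserver-secondary];

-- ...wait for seeding to finish, then, just before the risky change,
-- break the link so the secondary is frozen at your roll-back point
ALTER DATABASE [MyAppDb] REMOVE SECONDARY ON SERVER [myserver-secondary];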
I am in need of some clarification on the best SQL DB backup strategy with regard to Azure. We have developed, deployed, and are now finished with a short-term VS MVC Code First application, but I would like to 'back up' the database, blob storage assets, etc., so a year from now we can re-deploy (build up) a working application quickly.
I have exported a .bacpac file and pulled it down locally (2 MB), but I'm not sure whether this file will be enough to do a full DB restore in the future (I am not a DB guy per se). NOTE: we plan on disabling all servers/apps/databases/blob storage containers in Azure for this project, since our client is unwilling to pay for long-term storage or maintenance. So my concern is to make sure the backup file is not tied to a specific server or to any other kind of Azure dependency.
Ultimately all of these assets will be stored in our source control for usage a year from now.
Any advice/direction would be greatly appreciated. It's a little confusing, with what seem to be several differing backup strategies and how they pertain to specific DB-centric considerations (pros/cons).
Thanks in advance.
A .bacpac file contains all of the database schema and data and can be re-deployed later, to Azure SQL Database or SQL Server in a VM. You may also be able to recreate the database from your EF Code-First code through migrations and seeding.
I have two databases for my customers, a LIVE database and a TEST database. Periodically my customers would like the LIVE data copied over to the TEST database so it has current data that they can mess around with.
I am looking for a simple way that I could run maybe a command in my application that would copy all the data from one and move it into the other.
Currently I have to remote in with their IT department or consultant and restore from a backup of the LIVE to the TEST. I would prefer to have a button in my application that says RESTORE TEST and it will do that for me.
Any suggestions on how I could do this? Is it possible in SQL? Is there a tool out there in .NET that could help?
Thanks
If you have a backup plan, which I hope you do, you could simply restore the latest full .bak, if it is accessible to your application. However, this would require some tunneling for your application to access the latest backup file and this is generally a no-no for zones containing database servers.
Perhaps you could set up a scheduled delivery of a backup from machine to machine. Does the LIVE server have access to your TEST server? I wouldn't think that a DBA would be inclined to set up delivery of a backup unless it was to a remote backup location for disaster recovery, and that is usually to dedicated locations, not a testing server. Maybe you could work out a scenario where your TEST server doubles as an extra remote backup location for the production environment.
It would be better to be shipped a backup and then periodically or manually start a job to restore from the local backup. This takes the burden off your application; you would then only need to kick off the SQL job from within your app as needed.
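If the DBA sets up a SQL Agent job on the TEST server that restores from that local backup, the button in your application only has to start the job. A hedged sketch; the job name, file path and logical file names are assumptions:

-- Fired from the app over a normal connection; the job runs asynchronously on the server
EXEC msdb.dbo.sp_start_job @job_name = N'Refresh TEST from LIVE backup';

-- Inside the job, a step along these lines does the actual restore
RESTORE DATABASE [TEST]
FROM DISK = N'D:\Backups\LIVE_latest.bak'
WITH REPLACE, STATS = 10,
MOVE N'LIVE_Data' TO N'D:\Data\TEST.mdf',      -- check the real logical names
MOVE N'LIVE_Log'  TO N'D:\Data\TEST_log.ldf';  -- with RESTORE FILELISTONLY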
We are not hosting our databases. Right now, one person is manually creating a .bak file from the production server. The .bak is then copied to each developer's PC. Is there a better approach that would make this process easier? I am working on a build project right now for our team, and I am thinking about adding the .bak file into SVN so each person has the correct local version. I tried generating a SQL script, but it contains no data, just the schema.
Developers can't share a single dev database?
Adding the .bak file to SVN sounds bad. That's going to keep every version of it forever - you'd be better off (in most cases) leaving it on a network share visible by all developers and letting them copy it down.
You might want to use SSIS packages to let developers make ad hoc copies of production.
You might also be interested in the Data Publishing Wizard, an open source project that lets you script databases with their data. But I'd lean towards SSIS if developers need their own copy of the database.
If the production server has online connectivity to your site you can try the method called "log shipping".
This entails creating a baseline copy of your production database, then taking chunks of the transaction log written on the production server and applying (the actions contained in) those log chunks to your copy. This ensures that, after a certain delay, your backup database will be in the same state as the production database.
Detailed information can be found here: http://msdn.microsoft.com/en-us/library/ms187103.aspx
As you mentioned SQL 2008 among the tags: as far as I remember, SQL 2008 has some built-in automation (a wizard in Management Studio) to set this up.
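If you would rather script the idea by hand than rely on the built-in wizard, the core of it is just a log backup on production and a log restore on the copy, which stays in a restoring state so the next chunk can be applied. A rough sketch with made-up names and paths:

-- On the production server: back up a chunk of the transaction log
BACKUP LOG [ProdDb] TO DISK = N'\\share\logship\ProdDb_001.trn';

-- On the developer copy (restored earlier WITH NORECOVERY): apply the chunk,
-- leaving the database in NORECOVERY so further chunks can follow
RESTORE LOG [ProdDb] FROM DISK = N'\\share\logship\ProdDb_001.trn' WITH NORECOVERY;

-- When a developer actually wants to use the copy, bring it online
RESTORE DATABASE [ProdDb] WITH RECOVERY;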
You can create a scheduled backup and restore.
You don't have to use a developer PC for the backup, because SQL Server has its own backup folder you can use.
You can also have a restore script generated for each PC from one location, if the developers want to hold the database on their local systems. For example:
RESTORE DATABASE [xxxdb] FROM
DISK = N'\\xxxx\xxx\xxx\xxxx.bak'
WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10
GO
Check out SQL Source Control from RedGate; it can be used to keep schema and data in sync with a source control repository (the docs say it supports SVN). It supports the database on a centrally deployed server, or on many developer machines as well.
Scripting out the data probably won't be a fun time for everyone depending on how much data there is, but you can also select which tables you're going to do (like lookups) and populate any larger business entity tables using SSIS (or data generator for testing).
Using Oracle 10g with our testing server, what is the most efficient/easiest way to back up and restore a database to a static point, assuming that you always want to go back to the given point once a backup has been created?
A sample use case would be the following:
Install and configure all software
Modify data to the base testing point
Take a backup somehow (this is part of the question: how to do this)
Do testing
Return to the step-3 state (restore back to the backup point; this is the other half of the question)
Optimally this would be completed through sqlplus or rman or some other scriptable method.
You do not need to take a backup at your base time. Just enable flashback database, create a guaranteed restore point, run your tests and flashback to the previously created restore point.
The steps for this would be:
Startup the instance in mount mode.
startup force mount;
Create the restore point.
create restore point before_test guarantee flashback database;
Open the database.
alter database open;
Run your tests.
Shutdown and mount the instance.
shutdown immediate;
startup mount;
Flashback to the restore point.
flashback database to restore point before_test;
Open the database with RESETLOGS (required after a flashback).
alter database open resetlogs;
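Two small extras worth knowing (plain SQL*Plus, nothing specific to this setup): you can check the restore point before testing, and you should drop it once you are finished so it stops pinning flashback logs in the flash recovery area.
select name, time, guarantee_flashback_database from v$restore_point;
drop restore point before_test;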
You could use a feature in Oracle called Flashback which allows you to create a restore point, which you can easily jump back to after you've done testing.
Quoted from the site,
Flashback Database is like a 'rewind button' for your database. It provides database point in time recovery without requiring a backup of the database to first be restored. When you eliminate the time it takes to restore a database backup from tape, database point in time recovery is fast.
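To use it, Flashback Database first has to be enabled, which on 10g means the database is in ARCHIVELOG mode and is mounted when you switch it on (this sketch assumes a flash recovery area, db_recovery_file_dest, is already configured):
shutdown immediate;
startup mount;
alter database archivelog;
alter database flashback on;
alter database open;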
From my experience import/export is probably the way to go. Export creates a logical snapshot of your DB so you won't find it useful for big DBs or exacting performance requirements. However it works great for making snapshots and whatnot to use on a number of machines.
I used it on a rails project to get a prod snapshot that we could swap between developers for integration testing and we did the job within rake scripts. We wrote a small sqlplus script that destroyed the DB then imported the dump file over the top.
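For what it's worth, that destroy-and-reimport script was nothing fancy; something along these lines, where the schema name, password and dump file are placeholders and imp is the classic import tool shelled out to from SQL*Plus:

-- connected as a DBA user in SQL*Plus
drop user app_owner cascade;
create user app_owner identified by app_owner;
grant connect, resource to app_owner;
-- shell out to classic import to reload the production snapshot
host imp system/password file=prod_snapshot.dmp fromuser=app_owner touser=app_owner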
Some articles you may want to check:
OraFAQ Cheatsheet
Oracle Wiki
Oracle apparently doesn't favour imp/exp any more, preferring Data Pump; when we used Data Pump we needed things we couldn't have (i.e. SYSDBA privileges we couldn't get in a shared environment). So take a look, but don't be disheartened if Data Pump is not your bag; the old imp/exp tools are still there :)
I can't recommend RMAN for this kind of thing, because RMAN takes a lot of setup and needs config in the DB (it also has its own catalog DB for backups, which is a pain in the proverbial for a bare-metal restore).
If you are using a filesystem that supports copy-on-write snapshots, you could set up the database to the state that you want. Then shut down everything and take a filesystem snapshot. Then go about your testing and when you're ready to start over you could roll back the snapshot. This might be simpler than other options, assuming you have a filesystem which supports snapshots.
@Michael Ridley's solution is perfectly scriptable, and will work with any version of Oracle.
This is exactly what I do, I have a script which runs weekly to
Rollback the file system
Apply production archive logs
Take new "Pre-Data-Masking" FS snapshot
Reset logs
Apply "preproduction" data masking.
Take new "Post-Data-Masking" snapshot (allows rollback to post masked data)
Open database
This allows us to keep our development databases close to our production database.
To do this I use ZFS.
This method can also be used for your applications, or even your entire "environment" (e.g. you could "roll back" your entire environment with a single scripted command).
If you are running 10g, though, the first thing you'd probably want to look into is Flashback, as it's built into the database.