I'm using azure online backup vault for DPM long term retention. We're using about 14TB of Azure Geo redundant space.
I want to change the redundancy from Geo to Local, but it's greyed out. Is there a way to change it? Our usage forecast for DPM backup data was much lower than what is actually being used, so it's costing too much.
If this is not possible, is there a way to migrate the complete geo-redundant vault to a locally redundant vault or to the new Recovery Services vault?
Thanks Jon
After an item has been registered to the vault, the storage redundancy option is locked and cannot be modified. For more detailed information, please refer to this article.
It is not possible to move protected items from an existing geo-redundant backup vault to a new locally redundant one. I recommend creating a new backup vault with locally redundant storage and backing up your DPM servers to that vault.
I am trying to move my DB to Azure Managed Instance.
I would like to know what I should do about this.
I have the options of DB migration and DB restore to Azure Managed Instance.
Please let me know the difference between these 2 methods.
Normally you first do a full backup and then migrate the database.
If something goes wrong, you restore the database.
But I don't know exactly how Azure MI works.
For the database restore option: you take a backup of your database and then restore that backup in the cloud. This works if your application(s) or service(s) can handle a moderate downtime, or if downtime does not matter to you. After the restore you just need to redirect the database connections of your application(s) or service(s).
The database migration option is a little more complex. Generally, it is used for database systems where we cannot afford downtime. This option also always starts with a database restore (or initial load). After that you need to keep the cloud copy of the database up to date, so the logs generated on the source database system are continuously applied to the target system until you decide to switch over to the cloud database.
PS: Of course, data volume is another critical aspect of a database migration to the cloud.
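To make the restore option above concrete, here is a minimal sketch of a native restore from a .bak file in blob storage into a Managed Instance, assuming pyodbc connectivity; the server name, storage account, container, file name, and SAS token are placeholders, not something from your environment:

```python
# Sketch: native RESTORE FROM URL on Azure SQL Managed Instance.
# The connection string, storage account, container, file name, and SAS token
# below are placeholders -- substitute your own values.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=mymi.public.abc123.database.windows.net,3342;"   # hypothetical MI endpoint
    "DATABASE=master;UID=sqladmin;PWD=<password>"
)
conn.autocommit = True  # RESTORE and CREATE CREDENTIAL should not run inside a transaction
cur = conn.cursor()

# One-time: credential that lets the instance read the backup file from blob storage (SAS-based).
cur.execute("""
CREATE CREDENTIAL [https://mystorage.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<sas-token-without-leading-question-mark>';
""")

# Start the restore; on Managed Instance it continues server-side even if the session drops.
cur.execute("""
RESTORE DATABASE [MyDb]
FROM URL = 'https://mystorage.blob.core.windows.net/backups/MyDb.bak';
""")
```

After the restore completes you would repoint the application connection strings, as described above.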
I have a need to load data into Azure Hyperscale incrementally.
Source data is in Azure VM that has SQL server installed in it.
Source database is about 6 TB in size and has about 370 tables.
We need a way to get incremental changes in the last X amount of hours and sync them into the same database in Hyperscale.
Ideally, we would extend our database with the availability group setup but since Hyperscale does not support that, we need to find a way to keep these in sync.
Source database does have change data capture enabled.
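For reference, a minimal sketch of how the last few hours of changes could be read from CDC on the source, assuming pyodbc and a hypothetical capture instance named dbo_Orders (the part that applies the changes to the Hyperscale copy is left out):

```python
# Sketch: read CDC changes from the source SQL Server for the last N hours.
# The connection string and the dbo_Orders capture instance are hypothetical.
from datetime import datetime, timedelta
import pyodbc

SRC = ("DRIVER={ODBC Driver 18 for SQL Server};SERVER=sourcevm;"
       "DATABASE=SourceDb;Trusted_Connection=yes")

hours_back = 4
since = datetime.utcnow() - timedelta(hours=hours_back)

with pyodbc.connect(SRC) as conn:
    cur = conn.cursor()
    # Map the start time to an LSN and grab the current maximum LSN.
    from_lsn, to_lsn = cur.execute(
        "SELECT sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', ?), "
        "       sys.fn_cdc_get_max_lsn()", since).fetchone()

    # All inserts/updates/deletes for this capture instance in that LSN range.
    rows = cur.execute(
        "SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(?, ?, 'all')",
        from_lsn, to_lsn).fetchall()

    for row in rows:
        # __$operation column: 1 = delete, 2 = insert, 4 = update (new values).
        # Each row would be translated into an INSERT/UPDATE/DELETE or MERGE
        # against the same table in the Hyperscale database.
        pass
```

This would have to be repeated per table (or generated from the list of capture instances), which is why we are looking for something less manual.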
The best online migration option is to use the Azure Database Migration Service (link), where the Online (continuous sync) migration scenario (link) you need is supported:
The sync essentially runs in the background until it completes, and you can keep accessing the data that has already been migrated. I believe this is a continuous copy scenario rather than an incremental one. With PaaS database services you do not have access to perform snapshot replication operations from external data sources. The Hyperscale instance is built on snapshot replication, but it currently only serves the hosted database functionality.
Regards,
Mike
We have a couple of Azure elastic database pools with hundreds of databases. Now we want to enable the Transparent Data Encryption feature. It is not a server or pool setting, but a database setting.
It would cost me a day of clicking in the portal to enable TDE for all the individual databases. Is there a smarter way of doing this? Scripting, multi-selecting, or something like that?
Thank you for any help!
All newly created Azure SQL databases are encrypted by default with service-managed TDE. Databases that existed before May 2017, and databases created through restore, geo-replication, or database copy, are not encrypted by default. For more information, please read this documentation.
You can create an Elastic Job that runs ALTER DATABASE ... SET ENCRYPTION ON against every database in the pool. Read more about it here.
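If you would rather not set up Elastic Jobs, an alternative is a small script that loops over the databases and issues the ALTER DATABASE statement one by one. A minimal sketch, assuming pyodbc; the server name and credentials are placeholders:

```python
# Sketch: enable TDE on every database on a logical server, one database at a time.
# Server, user, and password are placeholders; the loop skips master.
import pyodbc

SERVER = "myserver.database.windows.net"   # hypothetical server
AUTH = "UID=sqladmin;PWD=<password>"

master = pyodbc.connect(
    f"DRIVER={{ODBC Driver 18 for SQL Server}};SERVER={SERVER};DATABASE=master;{AUTH}")
names = [r.name for r in master.execute(
    "SELECT name FROM sys.databases WHERE name <> 'master'")]

for name in names:
    db = pyodbc.connect(
        f"DRIVER={{ODBC Driver 18 for SQL Server}};SERVER={SERVER};DATABASE={name};{AUTH}")
    db.autocommit = True   # ALTER DATABASE cannot run inside a transaction
    # Encryption then proceeds in the background for each database.
    db.execute(f"ALTER DATABASE [{name}] SET ENCRYPTION ON")
    db.close()
```

You can check progress afterwards via the encryption_state column in sys.dm_database_encryption_keys per database.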
SQL Azure daily automatic backup, kept for an unlimited time
I am trying to follow the Azure tutorial: https://msdn.microsoft.com/en-us/library/f6899710-634e-425a-969d-8db1267e9471
But the STORAGE ACCOUNT field does not show any options, and I already have a storage account.
Databases are now automatically backed up and you don't need to set this up. This is what I found while playing with it today:
"Did you know we already backup your databases? You can restore databases from automatic backups using the Point-In-Time Restore and Geo-Restore capabilities. Learn more."
But if you need to maintain your own copies for longer periods of time: in manage.windowsazure.com, navigate to the database and then to "Configure", and you will find this option there.
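As an alternative to the portal option, you can also keep your own dated copies by scripting T-SQL's CREATE DATABASE ... AS COPY OF. A minimal sketch, assuming pyodbc; the server, credentials, and database name are placeholders, and this is not the portal's export feature:

```python
# Sketch: create a dated copy of an Azure SQL database as a long-term backup.
# Server, credentials, and database name are placeholders.
from datetime import date
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=sqladmin;PWD=<password>")
conn.autocommit = True   # CREATE DATABASE cannot run inside a transaction

source = "MyDb"
copy_name = f"{source}_backup_{date.today():%Y%m%d}"

# Database names cannot be passed as parameters, so they are inlined here;
# keep them under your own control (do not build them from user input).
conn.execute(f"CREATE DATABASE [{copy_name}] AS COPY OF [{source}]")

# The copy is created asynchronously; progress can be checked in
# sys.dm_database_copies on the server.
```

Note that each copy is a full, billable database, so you would typically export or drop old copies on a schedule.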
I have the following set up:
Azure service
Azure SQL database
Azure Table Storage
Azure Blob Storage
I am trying to develop a backup strategy for this service.
The thing is that the SQL database, tables, and blobs should be in sync: in the backup, all three have to be of the same version (backups taken at the same moment). And the main problem is that I can only afford several minutes of downtime, not more than that.
What should I do? Maybe there is an existing solution?
Windows Azure Storage supports geo-replication for Blobs, Tables and Queues. Data in the storage account is made durable by replicating transactions across different storage nodes in the same region (LRS) or a secondary region (GRS). GRS is the default redundancy option when creating a storage account. Refer to http://blogs.msdn.com/b/windowsazurestorage/archive/2013/12/11/introducing-read-access-geo-replicated-storage-ra-grs-for-windows-azure-storage.aspx for more details.
If you want to build a custom backup solution, you could use the techniques suggested in the two blogs below; a small snapshot sketch follows the links.
1) http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/30/protecting-your-blobs-against-application-errors.aspx
2) http://blogs.msdn.com/b/windowsazurestorage/archive/2010/05/03/protecting-your-tables-against-application-errors.aspx
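Along the lines of the first blog (protecting blobs with snapshots), here is a minimal sketch that takes a point-in-time snapshot of every blob in a container, written against the current azure-storage-blob Python package; the connection string and container name are placeholders:

```python
# Sketch: take a point-in-time snapshot of every blob in a container.
# Connection string and container name are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("app-data")   # hypothetical container

for blob in container.list_blobs():
    blob_client = container.get_blob_client(blob.name)
    snapshot = blob_client.create_snapshot()            # read-only point-in-time copy
    print(blob.name, "->", snapshot["snapshot"])        # snapshot id / timestamp
```

Running something like this at the same moment as your SQL backup keeps the blob side aligned with the database version.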
I am not sure of the exact use case for why you need to back up Azure Table and Blob storage. You can back up all of the above services without downtime; there might be a slight glitch or performance bottleneck on the SQL database during the backup.
The short answer is to write a custom script that reads the data from Azure Table storage (or the SQL database, or whichever service you need), packages it into an archive, and stores it back.
The important thing to consider is where you store the backups; broadly speaking, archives are usually stored in blob storage. You need to decide where you will keep them: if you store them on-premises, you need to account for the local storage, the outbound bandwidth cost, and the latency of transferring the data out of Azure.
PS: cloud storage by itself has a good level of availability and durability; you can further improve these factors by enabling geo-replication.
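Following that custom-script idea, here is a minimal sketch that dumps the entities of one Azure table into a JSON archive in blob storage; the connection string, table name, and container name are placeholders, and the SQL database part is left out:

```python
# Sketch: archive one Azure table to a JSON blob.
# Connection string, table name, and container name are placeholders.
import json
from datetime import datetime
from azure.data.tables import TableServiceClient
from azure.storage.blob import BlobServiceClient

CONN = "<storage-connection-string>"

# Read all entities from the table (entities behave like dicts).
table = TableServiceClient.from_connection_string(CONN).get_table_client("Orders")
entities = [dict(e) for e in table.list_entities()]

# Serialize and upload as one archive blob; default=str handles datetimes and GUIDs.
payload = json.dumps(entities, default=str)
blob_name = f"orders-{datetime.utcnow():%Y%m%dT%H%M%S}.json"
blob = BlobServiceClient.from_connection_string(CONN) \
    .get_blob_client(container="backups", blob=blob_name)
blob.upload_blob(payload)
```

If you run the table dump, the blob snapshot, and the SQL backup from one scheduled job, the three archives stay close enough in time to count as the same version.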