I am trying to clone a SQL VM to another resource group. Cloning a normal VM is simple:
Create disk snapshots (OS & data disks)
Create disks from the snapshots
Create the VM from the managed disks
The image I have is (image: SQL Server 2019 Standard on Windows Server 2022 - Gen2). Following the above steps creates only a plain VM, not a SQL Virtual Machine.
Please let me know if anyone knows the correct steps or any documentation.
Thanks in advance.
You can perform this activity through PowerShell. Please go through the steps below.
To create a snapshot using the Azure portal, complete these steps (you can skip this if you have already created a snapshot):
In the Azure portal, select Create a resource.
Search for and select Snapshot.
In the Snapshot window, select Create. The Create snapshot window appears.
For Resource group, select an existing resource group or enter the name of a new one.
Enter a Name, then select a Region and Snapshot type for the new snapshot. If you would like to store your snapshot in zone-resilient storage, you need to select a region that supports availability zones. For a list of supported regions, see Azure regions with availability zones.
For Source subscription, select the subscription that contains the managed disk to be backed up.
For Source disk, select the managed disk to snapshot.
For Storage type, select Standard HDD, unless you require zone-redundant storage or high-performance storage for your snapshot.
If needed, configure settings on the Encryption, Networking, and Tags tabs. Otherwise, default settings are used for your snapshot.
Select Review + create.
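If you prefer to script this step as well, a minimal PowerShell sketch for taking the snapshot from an existing managed disk (the resource group, disk, and snapshot names below are placeholders) could look like this:
#Placeholders: replace with your resource group, disk, and snapshot names
$resourceGroupName = 'yourResourceGroupName'
$diskName = 'yourOsOrDataDiskName'
$snapshotName = 'yourSnapshotName'
#Look up the source managed disk
$disk = Get-AzDisk -ResourceGroupName $resourceGroupName -DiskName $diskName
#Build the snapshot configuration from the disk and create the snapshot
$snapshotConfig = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
New-AzSnapshot -Snapshot $snapshotConfig -SnapshotName $snapshotName -ResourceGroupName $resourceGroupName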
Moving the snapshot of a SQL Virtual Machine to a different resource group in another subscription
PowerShell:
#Provide the subscription Id of the subscription where snapshot exists
$sourceSubscriptionId='yourSourceSubscriptionId'
#Provide the name of your resource group where snapshot exists
$sourceResourceGroupName='yourResourceGroupName'
#Provide the name of the snapshot
$snapshotName='yourSnapshotName'
#Set the context to the subscription Id where snapshot exists
Select-AzSubscription -SubscriptionId $sourceSubscriptionId
#Get the source snapshot
$snapshot= Get-AzSnapshot -ResourceGroupName $sourceResourceGroupName -Name $snapshotName
#Provide the subscription Id of the subscription where snapshot will be copied to
#If snapshot is copied to the same subscription then you can skip this step
$targetSubscriptionId='yourTargetSubscriptionId'
#Name of the resource group where snapshot will be copied to
$targetResourceGroupName='yourTargetResourceGroupName'
#Set the context to the subscription Id where snapshot will be copied to
#If snapshot is copied to the same subscription then you can skip this step
Select-AzSubscription -SubscriptionId $targetSubscriptionId
#Store your snapshots in Standard storage to reduce cost. Please use Standard_ZRS in regions where zone redundant storage (ZRS) is available, otherwise use Standard_LRS
#Please check out the availability of ZRS here: https://docs.microsoft.com/en-us/Az.Storage/common/storage-redundancy-zrs#support-coverage-and-regional-availability
$snapshotConfig = New-AzSnapshotConfig -SourceResourceId $snapshot.Id -Location $snapshot.Location -CreateOption Copy -SkuName Standard_LRS
#Create a new snapshot in the target subscription and resource group
New-AzSnapshot -Snapshot $snapshotConfig -SnapshotName $snapshotName -ResourceGroupName $targetResourceGroupName
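To get back a SQL Virtual Machine (and not just a plain VM) in the target resource group, after you create managed disks from the copied snapshots and build the new VM from them, you also need to register that VM with the SQL IaaS Agent extension. A rough sketch, assuming the Az.SqlVirtualMachine module is installed (the names, license type, and management mode below are placeholders, and exact parameters can vary by module version):
#Placeholders: resource group, name, and region of the VM rebuilt from the copied snapshots
$targetResourceGroupName='yourTargetResourceGroupName'
$newVmName='yourNewVmName'
$location='yourRegion'
#Register the VM with the SQL IaaS Agent extension so it shows up as a SQL Virtual Machine resource
#LicenseType is typically PAYG or AHUB depending on how SQL Server is licensed
New-AzSqlVM -ResourceGroupName $targetResourceGroupName -Name $newVmName -Location $location -LicenseType 'PAYG' -SqlManagementType 'LightWeight'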
I accidentally dropped a column. I have no backup set up for this single-node setup. Does CockroachDB have any auto-backup mechanism, or am I screwed?
We could use time-travel queries to restore deleted data within the garbage collection window, before the data is deleted forever.
The garbage collection window is determined by the gc.ttlseconds field in the replication zone configuration.
Examples are:
SELECT name, balance
FROM accounts
AS OF SYSTEM TIME '2016-10-03 12:45:00'
WHERE name = 'Edna Barath';
SELECT * FROM accounts AS OF SYSTEM TIME '-4h';
SELECT * FROM accounts AS OF SYSTEM TIME '-20m';
I noticed that managed CockroachDB runs database backups (incremental or full) hourly and retains them for up to 30 days. You may be able to restore the whole database from one of them.
Please note that the restore will make your cluster unavailable for the duration of the restore. All current data is deleted.
We can manage our own backups, including incremental, database-level, and table-level backups. We need to configure a userfile location or a cloud storage location. This requires billing information.
CockroachDB stores old versions of data at least through its configured gc.ttlseconds window (default one day). There's no simple way that I know of to instantly restore, but you can do
SELECT * FROM <tablename> AS OF SYSTEM TIME <timestamp before dropping the column>
And then manually reinsert the data from there.
I have a simple query that takes old data from a table and inserts the data into another table for archiving.
DELETE FROM Events
OUTPUT DELETED.*
INTO ArchiveEvents
WHERE GETDATE()-90 > Events.event_time
I want this query to run daily.
As I currently understand, there is no SQL Server Agent when using Azure SQL Database, so SQL Server Agent does not seem like the solution here.
What is the easiest/best solution to this using Azure SQL Database?
There are multiple ways to run automated scripts on Azure SQL Database as below:
Using Automation Account Runbooks.
Using Elastic Database Jobs in Azure
Using Azure Data Factory.
As you are running just one script, I would suggest you take a look at Automation Account runbooks. As an example, below is a PowerShell runbook to execute the statement.
$database = @{
'ServerInstance' = 'servername.database.windows.net'
'Database' = 'databasename'
'Username' = 'uname'
'Password' = 'password'
'Query' = 'DELETE FROM Events OUTPUT DELETED.* INTO ArchiveEvents WHERE GETDATE()-90 > Events.event_time'
}
Invoke-Sqlcmd @database
Then, it can be scheduled as needed:
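If you prefer to create the schedule itself with PowerShell instead of the portal, a rough sketch (the Automation account, resource group, runbook, and schedule names below are placeholders) could be:
#Placeholders: your Automation account, resource group, and runbook name
$automationAccountName='yourAutomationAccount'
$resourceGroupName='yourResourceGroupName'
$runbookName='yourRunbookName'
#Create a daily schedule starting tomorrow at 02:00
$schedule = New-AzAutomationSchedule -AutomationAccountName $automationAccountName -ResourceGroupName $resourceGroupName -Name 'DailyArchive' -StartTime (Get-Date).Date.AddDays(1).AddHours(2) -DayInterval 1
#Link the schedule to the runbook so it runs every day
Register-AzAutomationScheduledRunbook -AutomationAccountName $automationAccountName -ResourceGroupName $resourceGroupName -RunbookName $runbookName -ScheduleName $schedule.Name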
You asked in part for a comparison of Elastic Jobs to runbooks.
Elastic Jobs will also run a pre-determined SQL script against a target set of servers/databases.
Elastic Jobs were built internally for Azure SQL by Azure SQL engineers, so the technology is supported at the same level as Azure SQL.
Elastic Jobs can be defined and managed entirely through PowerShell scripts (see the sketch below); they also support setup/configuration through T-SQL.
Elastic Jobs are handy if you want to target many databases: you set up the job once, set the targets, and it runs everywhere at once. If you have many databases on a given server that would be good targets, you only need to specify the target server, and all of the databases on that server are automatically targeted.
If you are adding or removing databases on a given server and want the job to adjust to this change dynamically, Elastic Jobs is designed to do this seamlessly. You just configure the job against the server, and every time it runs it will target all (non-excluded) databases on that server.
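As a rough illustration of the PowerShell route, a minimal sketch with the Az.Sql module (this assumes an elastic job agent and the job/refresh credentials already exist; the names below are placeholders and parameters may differ slightly by module version):
#Get the existing elastic job agent (its server and job database are already provisioned)
$agent = Get-AzSqlElasticJobAgent -ResourceGroupName 'yourResourceGroup' -ServerName 'yourAgentServer' -Name 'yourJobAgent'
#Target every database on a given server; databases added later are picked up automatically
$targetGroup = $agent | New-AzSqlElasticJobTargetGroup -Name 'AllDatabases'
$targetGroup | Add-AzSqlElasticJobTarget -ServerName 'targetserver.database.windows.net' -RefreshCredentialName 'yourRefreshCredential'
#Create a daily job with a single T-SQL step (start it later with Start-AzSqlElasticJob if needed)
$job = $agent | New-AzSqlElasticJob -Name 'ArchiveEvents' -IntervalType Day -IntervalCount 1 -StartTime (Get-Date).Date.AddDays(1)
$job | Add-AzSqlElasticJobStep -Name 'Step1' -TargetGroupName $targetGroup.TargetGroupName -CredentialName 'yourJobCredential' -CommandText 'DELETE FROM Events OUTPUT DELETED.* INTO ArchiveEvents WHERE GETDATE()-90 > Events.event_time'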
For reference, I am a Microsoft Employee who works in this space.
I have written a walkthrough and fuller explanation of Elastic Jobs in a blog series. Here is a link to the entry point of the series: https://techcommunity.microsoft.com/t5/azure-sql/elastic-jobs-in-azure-sql-database-what-and-why/ba-p/1177902
You can use Azure Data Factory: create a pipeline that executes the SQL query and add a trigger to run it every day. Azure Data Factory is used to move and transform data from Azure SQL or other storage.
I am trying to copy my existing Azure SQL (singleton) database from a PRODUCTION subscription into a NON-PRODUCTION database in a different subscription. I need to repeat this process every night so that our production support environment (non-prod) has the latest copy from production from the night before. Since overwriting a database is not possible in Azure SQL, I would like to copy "DBProd" from the PROD server as "DBProd2" onto my non-prod server (in a different subscription), then delete the existing "DBProd" from the destination server and rename "DBProd2" to "DBProd".
I have searched through this site to find answers and the closest I found was this link below...
Cross Subscription Copying of Databases on Windows Azure SQL Database
In that link, user @PaulH submitted the answer below...
"This works for me across subscriptions without having matching SQL Server accounts. I am a member of the server active directory admin group on both source and target servers, and connecting using AD authentication with MFA. – paulH Mar 25 at 11:22"
However, I could not figure out the details of how it was achieved. My preference is to use a PowerShell script to get this done. If any of you have done this before, I would appreciate a snippet of sample code, or any pointers to achieve this.
My other option is to go the BACPAC route (export and import), but I would only want to resort to that if copying of DB across subscriptions is not possible.
Thanks in advance!
Helios
Went through the link...
Cross Subscription Copying of Databases on Windows Azure SQL Database
The Move-AzureRmResource cmdlet may be all you need. Below is how it works.
Let's say you create a new Azure SQL server in a different resource group.
New-AzureSqlDatabaseServer -Location "East US" -AdministratorLogin "AdminLogin" -AdministratorLoginPassword "AdminPassword"
Copy the source database to the newly created Azure SQL Server.
Start-AzureSqlDatabaseCopy -ServerName "SourceServer" -DatabaseName "Orders" -PartnerServer "NewlyCreatedServer" -PartnerDatabase "OrdersCopy"
Move the resource group of the Newly created Azure SQL Server to another subscription.
Move-AzureRmResource -DestinationResourceGroupName <String> [-DestinationSubscriptionId <Guid>] -ResourceId <String[]> [-Force] [-ApiVersion <String>] [-Pre] [-DefaultProfile <IAzureContextContainer>] [-InformationAction <ActionPreference>] [-InformationVariable <String>] [-WhatIf] [-Confirm] [<CommonParameters>]
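As a concrete (hedged) example of that last step, note that for Azure SQL you move the server resource, which takes its databases with it; the names and IDs below are placeholders:
#Look up the newly created server that now holds the OrdersCopy database
$server = Get-AzureRmResource -ResourceGroupName 'NewResourceGroup' -ResourceType 'Microsoft.Sql/servers' -ResourceName 'NewlyCreatedServer'
#Move the server (and its databases) to the target subscription and resource group
Move-AzureRmResource -DestinationSubscriptionId 'targetSubscriptionId' -DestinationResourceGroupName 'TargetResourceGroup' -ResourceId $server.ResourceId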
For more information, please read this DBAStackExchange thread.
I have some questions about Azure file share snapshots; if you know something about them, please let me know. Thanks.
1. Where are the snapshots stored? Do they consume storage capacity, and what is the cost of creating and deleting snapshots?
2. If my snapshots exceed 200, what happens? Are older ones deleted automatically, or can new ones simply not be created?
3. May I delete a specific snapshot with Azure Automation (using a runbook to schedule it)?
4. If I use Azure Automation and Backup (Preview) to take Azure file share snapshots together, which snapshot will I get?
If you know something about this, please share with us (even if you can answer only one of them, I will mark it as an answer).
Thanks so much for your help.
Just a quick answer to some of your questions (for the others, I will update later).
Some of the answers can be found here.
1.1 Where are the snapshots stored?
Share snapshots are stored in the same storage account as the file share.
1.2 Will it cost the storage capacity
As this doc (the Space usage section) mentions: snapshots don't count toward your 5-TB share limit. There is no limit to how much space share snapshots occupy in total. Storage account limits still apply.
This means that when you create a file share, the Quota option lets you specify the share's maximum capacity (for example, 5 GB). Even if your snapshots in total (say, 10 GB) exceed that quota, they are still saved; the total snapshot capacity just has to stay within your storage account's limits.
If my snapshot exceeds 200, what will it be? Deleted by itself or the new one can't be created?
If you create more than 200 snapshots, an error will occur:
"Exception calling "Snapshot" with "0" argument(s): "The remote server returned an error: (409) Conflict.".
You can test it with the following PowerShell code:
$context = New-AzureStorageContext -StorageAccountName your_account_name -StorageAccountKey your_account_key
$share = Get-AzureStorageShare -Context $context -Name s22
#Take 202 snapshots in a loop; once the 200-snapshot limit is reached, the Snapshot() call throws the 409 Conflict error above
for($i=0;$i -le 201;$i++){
$share.Snapshot();
start-sleep -Seconds 1
}
May I delete the snapshot which I want by Azure Automation (use the runbook to schedules it)?
This should be possible; I will test it on my side later and then update you.
And most of the snapshot operation commands can be found here, including delete.
update:
$s = Get-AzureStorageShare -Context $context -SnapshotTime 2018-12-17T06:05:38.0000000Z -Name s33
$s.Delete() #delete the snapshot
Note:
For -SnapshotTime, you can pass the snapshot name to it. As of now, the snapshot name is always auto-assigned a UTC time value, like 2018-12-17T06:05:38.0000000Z
For -Name, pass the Azure file share name
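For question 3, a rough runbook sketch that deletes a specific snapshot on a schedule (the Automation variable names, share name, and snapshot time below are placeholders; adapt them to your own setup):
#Read the storage account name and key from Automation variables (placeholder variable names)
$accountName = Get-AutomationVariable -Name 'StorageAccountName'
$accountKey = Get-AutomationVariable -Name 'StorageAccountKey'
$context = New-AzureStorageContext -StorageAccountName $accountName -StorageAccountKey $accountKey
#Get the snapshot by share name and snapshot time, then delete it
$snapshot = Get-AzureStorageShare -Context $context -Name 's33' -SnapshotTime '2018-12-17T06:05:38.0000000Z'
$snapshot.Delete()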
We have an Azure Hadoop HDI system where most of the files are stored in an Azure Storage Account Blob. Accessing the files from Hadoop requires the WASBS:// file system type.
I want to configure SQL 2016 Polybase to pushdown compute to the HDI cluster for certain queries of data stored in the Azure blobs.
It is possible to use Azure Blobs outside Hadoop in Polybase. I completely understand that the query hint "option (FORCE EXTERNALPUSHDOWN)" will not work on the Blob system.
Is it possible to configure an external data source to use HDI for compute on the blob?
A typical external data source configuration is:
CREATE EXTERNAL DATA SOURCE AzureStorage with (
TYPE = HADOOP,
LOCATION ='wasbs://clustername@storageaccount.blob.core.windows.net',
CREDENTIAL = AzureStorageCredential
);
I believe that as long as WASBS is in there, pushdown compute will not work.
If I change the above to use HDFS, then I can certainly point to my HDI cluster, but then what would the LOCATION for the EXTERNAL TABLE be?
If this is in WASBS, then how would it be found in HDFS?
LOCATION='/HdiSamples/HdiSamples/MahoutMovieData/'
Surely there is a way to get Polybase to push down compute to an HDI cluster where the files are in WASBS. If not, then Polybase does not support the most common and recommended way to set up HDI.
I know the above is a lot to consider and any help is appreciated. If you are really sure it is not possible, just answer NO. Please remember, though, that I realize Polybase operating on Azure Blobs directly cannot push down compute. I want Polybase to connect to HDI and let HDI compute on the blob.
EDIT
Consider the following setup in Azure with HDI.
Note that the default Hadoop file system is WASBS. That means using a relative path such as /HdiSamples/HdiSamples/MahoutMovieData/user-ratings.txt will resolve to wasbs://YourClusterName@YourStorageAccount.blob.core.windows.net/HdiSamples/HdiSamples/MahoutMovieData/user-ratings.txt.
CREATE EXTERNAL DATA SOURCE HadoopStorage with (
TYPE = HADOOP,
LOCATION ='hdfs://172.16.1.1:8020',
RESOURCE_MANAGER_LOCATION = '172.16.1.1:8050',
CREDENTIAL = AzureStorageCredential
);
CREATE EXTERNAL TABLE [user-ratings] (
Field1 bigint,
Field2 bigint,
Field3 bigint,
Field4 bigint
)
WITH ( LOCATION='/HdiSamples/HdiSamples/MahoutMovieData/user-ratings.txt',
DATA_SOURCE = HadoopStorage,
FILE_FORMAT = [TabFileFormat]
);
There are many rows in the file in Hadoop. Yet, this query returns 0.
select count(*) from [user-ratings]
When I check the Remote Query Execution plan, it shows:
<external_uri>hdfs://172.16.1.1:8020/HdiSamples/HdiSamples/MahoutMovieData/user-ratings.txt</external_uri>
Notice the URI is an absolute path and is set to HDFS based on the External Data Source.
The query succeeds and returns zero rows because it is looking for a file/path that does not exist in the HDFS file system. (No "table not found" error is raised when the path is empty; that is normal for external tables.) What is bad is that the real data is stored in WASBS and has many rows.
What this all means is that pushdown compute is not supported when using Azure Blob storage as the Hadoop default file system. The recommended setup is to use Azure Blobs so that storage is separate from compute. It makes no sense that PolyBase would not support this setup, but as of now it appears not to.
I will leave this question up in case I am wrong. I really want to be wrong.
If you want PolyBase to push down computation to any Hadoop/HDI cluster, you need to specify RESOURCE_MANAGER_LOCATION while creating the external data source. The RESOURCE_MANAGER_LOCATION tells SQL Server where to submit a MapReduce job.